From prometheanfire at gentoo.org Sat Sep 1 00:52:09 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Fri, 31 Aug 2018 19:52:09 -0500
Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked
Message-ID: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org>

The requirements project has a co-installability test for the various
projects, networking-odl being included.

Because of the way the dependency on ceilometer is declared, it is blocking
all reviews and updates to the requirements project.

http://logs.openstack.org/96/594496/2/check/requirements-integration/8378cd8/job-output.txt.gz#_2018-08-31_22_54_49_357505

If networking-odl is not meant to be used as a library, I'd recommend
its removal from networking-bgpvpn (its test-requirements.txt file).
Once that is done, networking-odl can be removed from global-requirements
and we won't be blocked anymore.

As a side note, fungi noticed that when you branched you were still
installing ceilometer from master. Also, the ceilometer team
doesn't wish it to be used as a library either (like networking-odl
doesn't wish to be used as a library).

--
Matthew Thode (prometheanfire)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From dangtrinhnt at gmail.com Sat Sep 1 02:26:07 2018
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Sat, 1 Sep 2018 11:26:07 +0900
Subject: [openstack-dev] [Searchlight] Searchlight was moved to Storyboard
Message-ID: 

Dear team,

Our bug tracker and blueprints were moved to Storyboard. So from now on, please check them out here: https://storyboard.openstack.org/#!/project_group/93

Enjoy the whole new world :)

*Trinh Nguyen *| Founder & Chief Architect
*E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz *

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From s at cassiba.com Sun Sep 2 17:41:18 2018
From: s at cassiba.com (Samuel Cassiba)
Date: Sun, 2 Sep 2018 10:41:18 -0700
Subject: [openstack-dev] [chef] Retiring openstack/openstack-chef-repo
Message-ID: 

Ohai!

The entry point to Chef OpenStack, the openstack-chef-repo, is being retired in favor of openstack/openstack-chef. As such, the watch ends for openstack/openstack-chef-repo.

From a Chef perspective, openstack-chef-repo has been a perfectly adequate name, due to the prevalence of monorepos called 'chef-repo'. In the Chef ecosystem, this made perfect sense back in 2014 or 2015. In more recent times, based on the outsider perspective of people not nearly as immersed in the nomenclature, "why do you call it repo?" had started to emerge as a FAQ.

Both repositories were created with the same intent: the junction of OpenStack and Chef. However, openstack-chef existed before its time, boxed and packed away to the attic long before Chef OpenStack was even a notion. With the introduction of documentation being published to docs.o.o, it seemed like the logical time to migrate the entry point back to openstack/openstack-chef. With assistance from infra doing the heavy lifting of unretiring the project, openstack-chef was brought down from the attic and de-mothballed.

At the time of this writing, no new changes are being merged to openstack-chef-repo, and its jobs are noop. Focus has shifted entirely to openstack/openstack-chef, with it being the entry point for Zuul jobs, as well as Kitchen scenarios and documentation. All stable jobs going back to stable/ocata have been migrated, with the exception of the Cinder cookbook's Ocata release. It no longer tests cleanly due to the detritus of time, so it will remain in its current state.

The retirement festivities can be found at https://review.openstack.org/#/q/topic:retire-openstack-chef-repo

If you have any questions or concerns, please don't hesitate to reach out.
Best,
Samuel Cassiba (scas)

From tony at bakeyournoodle.com Mon Sep 3 03:18:49 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Mon, 3 Sep 2018 13:18:49 +1000
Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked
In-Reply-To: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org>
References: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org>
Message-ID: <20180903031848.GA6645@thor.bakeyournoodle.com>

On Fri, Aug 31, 2018 at 07:52:09PM -0500, Matthew Thode wrote:
> The requirements project has a co-installability test for the various
> projects, networking-odl being included.
>
> Because of the way the dependency on ceilometer is declared, it is blocking
> all reviews and updates to the requirements project.
>
> http://logs.openstack.org/96/594496/2/check/requirements-integration/8378cd8/job-output.txt.gz#_2018-08-31_22_54_49_357505
>
> If networking-odl is not meant to be used as a library, I'd recommend
> its removal from networking-bgpvpn (its test-requirements.txt file).
> Once that is done, networking-odl can be removed from global-requirements
> and we won't be blocked anymore.
>
> As a side note, fungi noticed that when you branched you were still
> installing ceilometer from master. Also, the ceilometer team
> doesn't wish it to be used as a library either (like networking-odl
> doesn't wish to be used as a library).

Yup, this seems totally wrong for anything to be importing ceilometer directly like that. The networking-* projects are pretty tightly coupled, so the links there are ok and workable, but the ceilometer thing needs to be reconsidered. Having said that, it's been part of the design for a while now.

The "quick" fix would be to have ceilometer published to PyPI, get requirements.txt fixed in networking-odl, and re-release that.
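[Editor's note: the effect of an upper-bound pin like the one discussed in this thread can be sketched with the `packaging` library, which pip itself builds on. The snippet below is illustrative only and is not taken from any of the reviews linked here.]

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Illustrative pin: keep networking-odl below the release that introduced
# the dependency on ceilometer.
pin = SpecifierSet("<13.0.0")

for candidate in ["12.0.0", "12.0.1", "13.0.0"]:
    status = "allowed" if Version(candidate) in pin else "blocked"
    print(candidate, status)
```

A requirements line such as `networking-odl<13.0.0` is evaluated by pip in exactly this membership sense.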
In order to unblock the requirements gate we *could* block 13.0.0 in global-requirements, but that's strange, as it means we're installing the queens version instead of rocky, and will more than likely have a cascade effect :(

https://review.openstack.org/599277 is my pragmatic compromise while we work through this.

Yours Tony.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From kevinzs2048 at gmail.com Mon Sep 3 05:56:49 2018
From: kevinzs2048 at gmail.com (Shuai Zhao)
Date: Mon, 3 Sep 2018 13:56:49 +0800
Subject: [openstack-dev] [kuryr] Some questions about kuryr
Message-ID: 

Hi Daniel,

As we know, there are two ways to deploy networking for Pod-in-VM in OpenStack through kuryr: macvlan and trunk. Why don't we just create ports in Neutron and attach them to the VM, so that we can easily use eth* in the VM to deploy networking for Pod-in-VM? And if we use macvlan mode when VMs are running on an overlay network, how could we resolve l2-population?

Best wishes to you!

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Mon Sep 3 06:27:10 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 03 Sep 2018 15:27:10 +0900
Subject: [openstack-dev] [grenade][osc][rocky]openstack client Rocky does not work with python-cinderclient Rocky
Message-ID: <1659e1d14ab.e0557eed12908.421517959657508825@ghanshyammann.com>

Hi All,

While doing the grenade setup to test the Rocky upgrade testing [1], I found that the osc Rocky version (3.15 or 3.16) does not work with the python-cinderclient Rocky version (>=4.0.0) [2].

The failures are due to the source_replica arg having been removed from python-cinderclient, which went into the Rocky release, while the osc fix for that went in after Rocky.
Openstackclient Rocky version - 3.16.0
cinderclient Rocky version - 4.0.1

These two versions do not work together because cinderclient >=4.0.0 has removed the source_replica arg, which is only handled in openstackclient > 3.16 [2]; so the openstackclient Rocky version (3.15 or 3.16) does not work with the cinderclient Rocky version.

We should backport the openstackclient fix [3] to Rocky and then release the osc version for Rocky. I have proposed the backport [4].

[1] https://review.openstack.org/#/c/591594
[2] http://logs.openstack.org/94/591594/2/check/neutron-grenade/b281347/logs/grenade.sh.txt.gz#_2018-09-03_01_29_36_289
[3] https://review.openstack.org/#/c/587005/
[4] https://review.openstack.org/#/c/599291/

-gmann

From hugh at wherenow.org Mon Sep 3 07:30:54 2018
From: hugh at wherenow.org (Hugh Saunders)
Date: Mon, 3 Sep 2018 08:30:54 +0100
Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core
In-Reply-To: 
References: 
Message-ID: 

Thanks for all your hard work on OSA Andy :)

On Thu, 30 Aug 2018 at 18:41 Andy McCrae wrote:
> Now that Rocky is all but ready it seems like a good time! Since changing
> roles I've not been able to keep up enough focus on reviews and other
> obligations - so I think it's time to step aside as a core reviewer.
>
> I want to say thanks to everybody in the community, I'm really proud to
> see the work we've done and how the OSA team has grown. I've learned a
> tonne from all of you - it's definitely been a great experience.
>
> Thanks,
> Andy
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Mon Sep 3 07:52:34 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 03 Sep 2018 16:52:34 +0900
Subject: [openstack-dev] [grenade][osc][rocky]openstack client Rocky does not work with python-cinderclient Rocky
In-Reply-To: <1659e1d14ab.e0557eed12908.421517959657508825@ghanshyammann.com>
References: <1659e1d14ab.e0557eed12908.421517959657508825@ghanshyammann.com>
Message-ID: <1659e6b443e.11b4ac0f815838.2215829193456163595@ghanshyammann.com>

---- On Mon, 03 Sep 2018 15:27:10 +0900 Ghanshyam Mann wrote ----
> Hi All,
>
> While doing the grenade setup to test the Rocky upgrade testing [1], I found that the osc Rocky version (3.15 or 3.16) does not work with the python-cinderclient Rocky version (>=4.0.0) [2].
>
> The failures are due to the source_replica arg having been removed from python-cinderclient, which went into the Rocky release, while the osc fix for that went in after Rocky.
>
> Openstackclient Rocky version - 3.16.0
> cinderclient Rocky version - 4.0.1
>
> These two versions do not work together because cinderclient >=4.0.0 has removed the source_replica arg, which is only handled in openstackclient > 3.16 [2]; so the openstackclient Rocky version (3.15 or 3.16) does not work with the cinderclient Rocky version.
>
> We should backport the openstackclient fix [3] to Rocky and then release the osc version for Rocky. I have proposed the backport [4].
>
> [1] https://review.openstack.org/#/c/591594
> [2] http://logs.openstack.org/94/591594/2/check/neutron-grenade/b281347/logs/grenade.sh.txt.gz#_2018-09-03_01_29_36_289
> [3] https://review.openstack.org/#/c/587005/
> [4] https://review.openstack.org/#/c/599291/

This should have been detected by the osc Rocky patches, but it seems the osc-functional-devstack job does not run on stable/rocky for zuul.yaml- or tox.ini-only changes [1]. I am not sure why the osc-functional-devstack job did not run for the patches below. I did not find an irrelevant-files regex which excludes those files.
We can see that stable/queens ran the functional job for similar changes [2]. Is something wrong with the job selection on the Zuul side?

[1] https://review.openstack.org/#/c/594306/
https://review.openstack.org/#/c/586005/
[2] https://review.openstack.org/#/c/594302/

> -gmann
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From jbadiapa at redhat.com Mon Sep 3 08:21:36 2018
From: jbadiapa at redhat.com (Juan Badia Payno)
Date: Mon, 3 Sep 2018 10:21:36 +0200
Subject: [openstack-dev] [Tripleo] fluentd logging status
In-Reply-To: <3054db77-3ba3-3ad2-25f0-17c7a3fc8df0@redhat.com>
References: <3054db77-3ba3-3ad2-25f0-17c7a3fc8df0@redhat.com>
Message-ID: 

On Fri, Aug 31, 2018 at 3:08 PM, Juan Antonio Osorio Robles <jaosorior at redhat.com> wrote:
> Logging is a topic that I think should get more love on the TripleO side.
>
> On 08/24/2018 12:17 PM, Juan Badia Payno wrote:
> > Recently, I did a little test regarding fluentd logging on the gates for
> > master [1], queens [2], and pike [3]. I don't like the status of it; I'm still
> > working on them, but basically there are quite a lot of misconfigured logs
> > and some services that are not configured at all.
> >
> > I think we need to put some effort into the logging. The purpose of this
> > email is to point out that we need to put a little effort into the task.
> >
> > First of all, I think we need to enable fluentd on all the scenarios, as
> > it is in the tests [1][2][3] mentioned at the beginning of the email. Once
> > everything is OK and some automatic test regarding logging is in place, they can
> > be disabled.
>
> Wes, do you have an opinion about this? I think it would be a good idea to
> avoid these types of regressions.
> > I'd love not to create a new bug for every misconfigured/unconfigured
> > service, but if requested to grab more attention on it, I will open it.
>
> One bug to fix all this is fine, but we do need a public place to track
> all the work that needs to be done. Let's reference that place on the bug.
> It could be Trello or an etherpad, or whatever you want; it's up to you.

I'm creating a Google spreadsheet [4] to track the status of the fluentd logging from Pike to master.

> The plan I have in mind is something like:
> * Make an initial picture of what the fluentd/log status is (from pike
> upwards).
> * Fix all misconfigured services. (designate,...)
> * Add the non-configured services. (manila,...)
> * Add an automated check to find a possible unconfigured/misconfigured
> problem.
>
> Any comments, doubts or questions are welcome
>
> Cheers,
> Juan
>
> [1] https://review.openstack.org/594836
> [2] https://review.openstack.org/594838
> [3] https://review.openstack.org/594840

[4] https://docs.google.com/spreadsheets/d/1Keh_hBYGb92rXV3VN44oPR6eGnX9LnTpcadWfMO8dZs/edit?usp=sharing

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Juan Badia Payno
Software Engineer
Red Hat EMEA ENG Openstack Infrastructure

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ichihara.hirofumi at gmail.com Mon Sep 3 08:52:58 2018
From: ichihara.hirofumi at gmail.com (Hirofumi Ichihara)
Date: Mon, 3 Sep 2018 17:52:58 +0900
Subject: [openstack-dev] [neutron] Bug deputy report week August 27 - September 2
Message-ID: 

Hi,

I was the bug deputy for the week of August 27 - September 2. There were no Critical bugs.

High:
https://bugs.launchpad.net/neutron/+bug/1789878
https://bugs.launchpad.net/neutron/+bug/1789846
https://bugs.launchpad.net/neutron/+bug/1789579
https://bugs.launchpad.net/neutron/+bug/1789434
https://bugs.launchpad.net/neutron/+bug/1789403

Medium:
https://bugs.launchpad.net/neutron/+bug/1790143
https://bugs.launchpad.net/neutron/+bug/1789499

New:
https://bugs.launchpad.net/neutron/+bug/1790084
This needs triage by the l3-dvr-backlog lieutenants.
https://bugs.launchpad.net/neutron/+bug/1790038
This needs triage by the l3-dvr-backlog lieutenants.
https://bugs.launchpad.net/neutron/+bug/1789334
This needs triage by the troubleshooting lieutenants or Osprofiler folks.

Incomplete:
https://bugs.launchpad.net/neutron/+bug/1790023
https://bugs.launchpad.net/neutron/+bug/1789870
https://bugs.launchpad.net/neutron/+bug/1789844

Wishlist:
https://bugs.launchpad.net/neutron/+bug/1789378

RFE:
https://bugs.launchpad.net/neutron/+bug/1789592
https://bugs.launchpad.net/neutron/+bug/1789391

Thanks,
Hirofumi

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sfinucan at redhat.com Mon Sep 3 09:21:11 2018
From: sfinucan at redhat.com (Stephen Finucane)
Date: Mon, 03 Sep 2018 10:21:11 +0100
Subject: [openstack-dev] Nominating Chris Dent for placement-core
In-Reply-To: 
References: 
Message-ID: <4dd044466b01a77b329e85c1108f26403547388b.camel@redhat.com>

On Fri, 2018-08-31 at 10:45 -0500, Eric Fried wrote:
> The openstack/placement project [1] and its core team [2] have been
> established in gerrit.
>
> I hereby nominate Chris Dent for membership in the placement-core team.
> He has been instrumental in the design, implementation, and stewardship
> of the placement API since its inception and has shown clear and
> consistent leadership.
>
> As we are effectively bootstrapping placement-core at this time, it
> would seem appropriate to consider +1/-1 responses from heavy placement
> contributors as well as existing cores (currently nova-core).
>
> [1] https://review.openstack.org/#/admin/projects/openstack/placement
> [2] https://review.openstack.org/#/admin/groups/1936,members

+1

From honjo.rikimaru at po.ntt-tx.co.jp Mon Sep 3 09:27:55 2018
From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo)
Date: Mon, 3 Sep 2018 18:27:55 +0900
Subject: [openstack-dev] [nova-lxd]Feature support matrix of nova-lxd
In-Reply-To: 
References: <084af1cf-7d31-6b5e-1bef-4fe1cc87d2ea@po.ntt-tx.co.jp>
Message-ID: <728d6945-d101-0565-27a2-454a8ff2ae44@po.ntt-tx.co.jp>

Hi James,

Thank you for agreeing. I will begin writing the document.

Best regards,

On 2018/08/31 20:03, James Page wrote:
> Hi Rikimaru
>
> On Fri, 31 Aug 2018 at 11:28 Rikimaru Honjo wrote:
>
>> Hello,
>>
>> I'm planning to write a feature support matrix [1] of nova-lxd and
>> add it to the nova-lxd repository.
>> A similar document exists as todo.txt [2], but it is old.
>>
>> Can I write it?
>
> Yes please!
>
>> If someone is writing the same document now, I'll stop writing.
>
> They are not - please go ahead - this would be a valuable contribution for
> users evaluating this driver.
>
> Regards
>
> James
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
★ (Note: my department name has changed.)
NTT TechnoCross Corporation
IoT Innovation Division, Second Business Unit (IV2BU)
Rikimaru Honjo
TEL: 045-212-7539
E-mail: honjo.rikimaru at po.ntt-tx.co.jp
Yokohama i-Mark Place 13F, 4-4-5 Minatomirai, Nishi-ku, Yokohama 220-0012

From ekuvaja at redhat.com Mon Sep 3 10:00:10 2018
From: ekuvaja at redhat.com (Erno Kuvaja)
Date: Mon, 3 Sep 2018 11:00:10 +0100
Subject: [openstack-dev] [glance][ptl] Canceled: Glance meeting Thu 6th and 13th of Sept
Message-ID: 

Hi all,

PTG is approaching fast and I'm pretty much offline this week for holidays, so I will cancel the next two meetings. We will get together Wed to Fri at the PTG and have the next Glance weekly meeting the week after.

Safe travels everyone, and I'm looking forward to seeing as many of ye as possible in Denver.

Erno -jokke- Kuvaja

From balazs.gibizer at ericsson.com Mon Sep 3 09:32:35 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Mon, 03 Sep 2018 11:32:35 +0200
Subject: [openstack-dev] Nominating Chris Dent for placement-core
In-Reply-To: 
References: 
Message-ID: <1535967155.32321.1@smtp.office365.com>

On Fri, Aug 31, 2018 at 5:45 PM, Eric Fried wrote:
> The openstack/placement project [1] and its core team [2] have been
> established in gerrit.
>
> I hereby nominate Chris Dent for membership in the placement-core team.
> He has been instrumental in the design, implementation, and stewardship
> of the placement API since its inception and has shown clear and
> consistent leadership.
>
> As we are effectively bootstrapping placement-core at this time, it
> would seem appropriate to consider +1/-1 responses from heavy placement
> contributors as well as existing cores (currently nova-core).
> > [1] https://review.openstack.org/#/admin/projects/openstack/placement > [2] https://review.openstack.org/#/admin/groups/1936,members +1 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From alex.kavanagh at canonical.com Mon Sep 3 10:54:34 2018 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Mon, 3 Sep 2018 11:54:34 +0100 Subject: [openstack-dev] [charms] Deployment guide stable/rocky cut In-Reply-To: <024de94f-a194-8811-c5fa-8cfdaf367a16@gmail.com> References: <024de94f-a194-8811-c5fa-8cfdaf367a16@gmail.com> Message-ID: Hi Ed Yes, it's in hand. I've got a review up: https://review.openstack.org/#/c/598138/ but I also need to create some stable branches, etc. May take a few more days. Thanks Alex. On Fri, Aug 31, 2018 at 12:48 PM, Edward Hope-Morley wrote: > Hi Frode, I think it would be a good idea to add a link to the charm > deployment guide at the following page: > > https://docs.openstack.org/rocky/deploy/ > > - Ed > > On 17/08/18 08:47, Frode Nordahl wrote: > > Hello OpenStack charmers, > > I am writing to inform you that a `stable/rocky` branch has been cut for > the `openstack/charm-deployment-guide` repository. > > Should there be any further updates to the guide before the release the > changes will need to be landed in `master` and then back-ported to > `stable/rocky`. > > -- > Frode Nordahl > Software Engineer > Canonical Ltd. 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Alex Kavanagh - Software Engineer
Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thomas.morin at orange.com Mon Sep 3 11:31:15 2018
From: thomas.morin at orange.com (Thomas Morin)
Date: Mon, 3 Sep 2018 13:31:15 +0200
Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked
In-Reply-To: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org>
References: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org>
Message-ID: 

Hi Matthew,

Matthew Thode, 2018-08-31 19:52:
> The requirements project has a co-installability test for the various
> projects, networking-odl being included.
>
> Because of the way the dependency on ceilometer is declared, it is
> blocking all reviews and updates to the requirements project.

(also blocking reviews for networking-bgpvpn)

> http://logs.openstack.org/96/594496/2/check/requirements-integration/8378cd8/job-output.txt.gz#_2018-08-31_22_54_49_357505
>
> If networking-odl is not meant to be used as a library, I'd recommend
> its removal from networking-bgpvpn (its test-requirements.txt file).
Historically, the driver allowing the use of networking-bgpvpn with the ODL SDN controller was in the networking-bgpvpn project; this is why we have this dependency (the driver uses some ODL utility code found in the networking-odl project).

We can work at removing this historical driver from networking-bgpvpn. Since a v2 driver (hosted in networking-odl) has existed for a long time, we can possibly do that without waiting. We just need to think about the best way to do it. ODL team, what do you think?

In the meantime, to unbreak the CI for networking-bgpvpn, I'm pushing [1], which puts an upper bound (< 13) on the dependency on networking-odl to avoid pulling in version 13 of networking-odl, which introduces the requirement on ceilometer.

-Thomas

[1] https://review.openstack.org/#/c/599310/2/test-requirements.txt

> Once that is done, networking-odl can be removed from global-requirements
> and we won't be blocked anymore.
>
> As a side note, fungi noticed that when you branched you were still
> installing ceilometer from master. Also, the ceilometer team
> doesn't wish it to be used as a library either (like networking-odl
> doesn't wish to be used as a library).
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From mahati.chamarthy at gmail.com Mon Sep 3 13:49:24 2018
From: mahati.chamarthy at gmail.com (Mahati C)
Date: Mon, 3 Sep 2018 19:19:24 +0530
Subject: [openstack-dev] Call for OpenStack Outreachy internship project proposals and funding
Message-ID: 

Hello everyone!

An update on the Outreachy program, including a request for volunteer mentors and funding.
Outreachy helps people from underrepresented groups get involved in free and open source software by matching interns with established mentors in the upstream community. OpenStack is a participating organization in the Outreachy Dec 2018 to Mar 2019 internships.

If you're interested in being a mentor, please publish your project ideas on this page: https://www.outreachy.org/communities/cfp/openstack/submit-project/. Here is a link that helps you get acquainted with the mentorship process: https://wiki.openstack.org/wiki/Outreachy/Mentors.

We have funding for two interns so far. We are looking for additional sponsors to help support OpenStack applicants. The sponsorship cost is 6,500 USD per intern, which is used to provide them with a stipend for the three-month program. You can learn more about sponsorship here: https://www.outreachy.org/sponsor/

Outreachy has been one of the most important and effective diversity efforts we've invested in. We have had many interns turn into long-term OpenStack contributors. Please help spread the word. If you are interested in becoming a mentor or sponsoring an intern, please contact me (mahati.chamarthy AT intel.com) or Samuel (samueldmq AT gmail.com).

Thank you!

Best,
Mahati

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thomas.morin at orange.com Mon Sep 3 15:16:33 2018
From: thomas.morin at orange.com (Thomas Morin)
Date: Mon, 3 Sep 2018 17:16:33 +0200
Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked
In-Reply-To: 
References: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org>
Message-ID: 

Thomas Morin, 2018-09-03 13:31:
> Matthew Thode, 2018-08-31 19:52:
> >
> > If networking-odl is not meant to be used as a library, I'd recommend
> > its removal from networking-bgpvpn (its test-requirements.txt file).
>
> We can work at removing this historical driver from networking-bgpvpn.
> Since a v2 driver (hosted in networking-odl) has existed for a long
> time, we can possibly do that without waiting. We just need to think
> about the best way to do it.

Realizing that we've had a warning announcing deprecation and future removal for the last release [1], I've pushed [2] to remove the ODL driver from master without waiting. Please comment there as needed.

-Thomas

[1] https://github.com/openstack/networking-bgpvpn/commit/ffee38097709dd4091fb8709a70cf6c361ed60ee#diff-88cc53515016b9f865a830b216c8e564
[2] https://review.openstack.org/599422

> > Once that is done, networking-odl can be removed from global-requirements
> > and we won't be blocked anymore.
> >
> > As a side note, fungi noticed that when you branched you were still
> > installing ceilometer from master. Also, the ceilometer team
> > doesn't wish it to be used as a library either (like networking-odl
> > doesn't wish to be used as a library).
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cdent+os at anticdent.org Mon Sep 3 15:27:22 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Mon, 3 Sep 2018 16:27:22 +0100 (BST)
Subject: [openstack-dev] [nova] [placement] extraction (technical) update
In-Reply-To: 
References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com>
Message-ID: 

There's been some progress on the technical side of extracting placement to its own repo. The summary is:

* https://git.openstack.org/cgit/openstack/placement exists
* https://review.openstack.org/#/c/599416/ is at the top of a series of patches. That patch is passing and voting on unit and functional tests for py 2.7 and 3.5 and is passing pep8.

More below, in the steps.

On Tue, 28 Aug 2018, Chris Dent wrote:
> On Mon, 27 Aug 2018, melanie witt wrote:
>> 1. We copy the placement code into the openstack/placement repo and have it
>> passing all of its own unit and functional tests.
>
> To break that down in more detail, how does this look?
> (note the ALL CAPS where more than acknowledgement is requested)
>
> 1.1 Run the git filter-branch on a copy of nova
> 1.1.1 Add missing files to the file list:
> 1.1.1.1 .gitignore
> 1.1.1.2 # ANYTHING ELSE?
> 1.2 Push -f that thing, acknowledged to be broken, to a seed repo on github
> (ed's repo should be fine)
> 1.3 Do the repo creation bits described in
> https://docs.openstack.org/infra/manual/creators.html
> to seed openstack/placement
> 1.3.1 set zuul jobs.
Either to noop-jobs, or non-voting basic
> func and unit # INPUT DESIRED HERE
> 1.4 Once the repo exists with some content, incrementally bring it to
> working
> 1.4.1 Update tox.ini to be placement oriented
> 1.4.2 Update setup.cfg to be placement oriented
> 1.4.3 Correct .stestr.conf
> 1.4.4 Move base of placement to "right" place
> 1.4.5 Move unit and functional tests to the right place
> 1.4.6 Do automated path fixings
> 1.4.7 Set up translation domain and i18n.py correctly
> 1.4.8 Trim placement/conf to just the conf settings required
> (api, base, database, keystone, paths, placement)
> 1.4.9 Remove database files that are not relevant (the db api is
> not used by placement)
> 1.4.10 Fix the Database Fixture to be just one database
> 1.4.11 Disable migrations that can't work (because of
> dependencies on nova code, 014 and 030 are examples)
> # INPUT DESIRED HERE AND ON SCHEMA MIGRATIONS IN GENERAL

030 is okay as long as nothing goes wrong. If something does go wrong, it raises exceptions which would currently fail, as the exception classes are not there. See below for more about exceptions.

> 1.4.12 Incrementally get tests working
> 1.4.13 Fix pep8
> 1.5 Make zuul pep, unit and functional voting

This is where we are now at https://review.openstack.org/#/c/599416/

> 1.6 Create tools for db table sync/create

I made some TODOs about this in setup.cfg, also noting that in addition to a placement-manage we'll want a placement-status.

> 1.7 Concurrently go to step 2, where the harder magic happens.
> 1.8 Find and remove dead code (there will be some).

Some dead code has been removed, but there will definitely be plenty more to find.

> 1.9 Tune up and confirm docs
> 1.10 Grep for remaining "nova" (as string and spirit) and fix

> Item 1.4.12 may deserve some discussion. When I've done this the
> several times before, the strategy I've used is to be test driven:
> run either functional or unit tests, find and fix one of the errors
> revealed, commit, move on.
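[Editor's note: for the db table sync/create tooling in item 1.6, creating tables directly from the SQLAlchemy model metadata is one option. The sketch below is illustrative only — the table definition is a hypothetical, much-reduced stand-in, not the actual placement models.]

```python
import sqlalchemy as sa

# Hypothetical stand-in for the placement models' shared metadata.
metadata = sa.MetaData()
sa.Table(
    "resource_providers", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("uuid", sa.String(36), nullable=False, unique=True),
)

def sync_tables(db_url):
    """Create any missing tables from the model metadata."""
    engine = sa.create_engine(db_url)
    metadata.create_all(engine)  # no-op for tables that already exist
    return engine

engine = sync_tables("sqlite://")  # in-memory database for illustration
print(sa.inspect(engine).get_table_names())
```

Because `create_all` skips tables that already exist, a hypothetical `placement-manage db sync` built this way would be safe to run repeatedly on a fresh install.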
In the patch set that ends with the review linked above, this is pretty much what I did, switching between a tox run of the full suite and using testtools.run to run an individual test file.

>> 2. We have a stack of changes to zuul jobs that show nova working but
>> deploying placement in devstack from the new repo instead of nova's repo.
>> This includes the grenade job, ensuring that upgrade works.

Do people have the time or info needed to break this step down into multiple steps like the '1' section above? Things I can think of:

* devstack patch to deploy placement from the new repo
* and use placement.conf
* stripping of placement out of nova, a bit like
  https://review.openstack.org/#/c/596291/ , unless we leave that entirely to step 4
* grenade tweaks (?)
* more

>> 3. When those pass, we merge them, effectively orphaning nova's copy of
>> placement. Switch those jobs to voting.
>>
>> 4. Finally, we delete the orphaned code from nova (without needing to make
>> any changes to non-placement-only test code -- code is truly orphaned).

Some questions I have:

* Presumably we can trim the placement DB migrations to just stuff that
  is relevant to placement and renumber accordingly?
* Could we also make it so we only run the migrations if we are not in
  a fresh install? In a fresh install we ought to be able to skip the
  migrations entirely and create the tables by reflection with the
  class models [1].
* I had another but I forgot.

[1] I did something similar in placedock for when starting from scratch:
https://github.com/cdent/placedock/blob/b5ca753a0d97e0d9a324e196349e3a19eb62668b/sync.py#L68-L73

-- 
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent

From s at cassiba.com Mon Sep 3 18:27:25 2018
From: s at cassiba.com (Samuel Cassiba)
Date: Mon, 3 Sep 2018 11:27:25 -0700
Subject: [openstack-dev] [chef] fog-openstack 0.2.0 breakage
In-Reply-To: 
References: 
Message-ID: 

On Fri, Aug 31, 2018 at 8:59 AM, Samuel Cassiba wrote:
> Ohai!
>
> fog-openstack 0.2.0 was recently released, which had less than optimal
> effects on Chef OpenStack due to the client cookbook's lack of version
> pinning on the gem.
>

Currently, the client cookbook is pinned to <0.2.0 going back to Ocata. Supermarket is updated as well. Due to the fallout generated, 0.2.x will be allowed where ChefDK introduces it, but 0.2.1 should be usable if you want to give it a go.

Best,
scas

From mmagr at redhat.com Mon Sep 3 19:16:01 2018
From: mmagr at redhat.com (Martin Magr)
Date: Mon, 3 Sep 2018 21:16:01 +0200
Subject: [openstack-dev] [tripleo-ansible] Future plans
Message-ID: 

Greetings,

since I did not find any blueprint regarding proper usage of tripleo-ansible, I would like to ask how exactly we plan to use the tripleo-ansible project, what the proper structure of roles/playbooks should be, etc. Given the discussion in [1] it is the best place for backup and restore playbooks, and I'd like to start preparing patches for B&R. Currently the development is being done in [2], but I hope that is only a temporary location.

Thanks in advance for answers,
Martin

[1] https://review.openstack.org/#/c/582453/
[2] https://github.com/centos-opstools/os-backup-ansible

-- 
Martin Mágr
Senior Software Engineer
Red Hat Czech

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gdubreui at redhat.com Tue Sep 4 00:37:35 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Tue, 4 Sep 2018 10:37:35 +1000
Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API
In-Reply-To: 
References: 
Message-ID: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com>

On 30/08/18 13:56, Edison Xiang wrote:
> Hi Ed Leafe,
>
> Thanks for your reply.
> Open API defines a standard interface description for REST APIs.
> Open API 3.0 can make a description (schema) for the current OpenStack REST
> API.
> It will not change the current OpenStack API.
> I am not a GraphQL expert. I looked up something about GraphQL.
> In my understanding, GraphQL will get current OpenAPI together and
> provide another APIs based on Relay,

Not sure what you mean here, could you please elaborate?

> and Open API is used to describe REST APIs and GraphQL is used to
> describe Relay APIs.

There is no such thing as "Relay APIs". GraphQL provides a de-facto API schema, and Relay provides extensions on top of it to facilitate re-fetching, paging and more.

GraphQL and OpenAPI have different feature scopes, and both have pros and cons. GraphQL delivers an API without using REST verbs, as all requests are done using POST and its data. Beyond that, what would be great (and it will ultimately come) is to have both of them working together.

The idea of the GraphQL Proof of Concept is to see what it can bring and at what cost, in terms of effort and trade-offs, and to compare this against the effort to adapt OpenStack APIs to use Open API.

BTW, what's the status of Open API 3.0 with regard to microversions?

Regards,
Gilles

>
> Best Regards,
> Edison Xiang
>
> On Wed, Aug 29, 2018 at 9:33 PM Ed Leafe wrote:
>
> > On Aug 29, 2018, at 1:36 AM, Edison Xiang wrote:
> > >
> > > As we know, Open API 3.0 was released in July 2017, about one year ago.
> > > Open API 3.0 supports some new features like anyOf, oneOf and allOf
> > > compared to Open API 2.0 (Swagger 2.0).
> > > Now OpenStack projects do not support Open API.
> > > Also I found some old emails in the Mail List about supporting
> > > Open API 2.0 in OpenStack.
> >
> > There is currently an effort by some developers to investigate the
> > possibility of using GraphQL with OpenStack APIs. What would Open
> > API 3.0 provide that GraphQL would not? I'm asking because I don't
> > know enough about Open API to compare them.
> > > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken1ohmichi at gmail.com Tue Sep 4 03:40:51 2018 From: ken1ohmichi at gmail.com (Kenichi Omichi) Date: Mon, 3 Sep 2018 20:40:51 -0700 Subject: [openstack-dev] Nominating Chris Dent for placement-core In-Reply-To: References: Message-ID: +1 2018年8月31日(金) 8:45 Eric Fried : > The openstack/placement project [1] and its core team [2] have been > established in gerrit. > > I hereby nominate Chris Dent for membership in the placement-core team. > He has been instrumental in the design, implementation, and stewardship > of the placement API since its inception and has shown clear and > consistent leadership. > > As we are effectively bootstrapping placement-core at this time, it > would seem appropriate to consider +1/-1 responses from heavy placement > contributors as well as existing cores (currently nova-core). 
> > [1] https://review.openstack.org/#/admin/projects/openstack/placement
> > [2] https://review.openstack.org/#/admin/groups/1936,members
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gkotton at vmware.com Tue Sep 4 05:36:43 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Tue, 4 Sep 2018 05:36:43 +0000
Subject: [openstack-dev] [neutron] Bug deputy report week August 27 - September 2
In-Reply-To: 
References: 
Message-ID: <9738BEB3-46B8-4BCE-96E9-9526909A3CFE@vmware.com>

Thanks for the update. I have taken the baton from the 2nd. Thankfully the last few days have been quiet. A luta continua ("the struggle continues").

From: Hirofumi Ichihara
Reply-To: OpenStack List
Date: Monday, September 3, 2018 at 11:54 AM
To: OpenStack List
Subject: [openstack-dev] [neutron] Bug deputy report week August 27 - September 2

Hi,

I was the bug deputy for the week of August 27 - September 2. There are no Critical bugs.

High:
https://bugs.launchpad.net/neutron/+bug/1789878
https://bugs.launchpad.net/neutron/+bug/1789846
https://bugs.launchpad.net/neutron/+bug/1789579
https://bugs.launchpad.net/neutron/+bug/1789434
https://bugs.launchpad.net/neutron/+bug/1789403

Medium:
https://bugs.launchpad.net/neutron/+bug/1790143
https://bugs.launchpad.net/neutron/+bug/1789499

New:
https://bugs.launchpad.net/neutron/+bug/1790084 This needs to be triaged by the l3-dvr-backlog lieutenants.
https://bugs.launchpad.net/neutron/+bug/1790038 This needs to be triaged by the l3-dvr-backlog lieutenants.
https://bugs.launchpad.net/neutron/+bug/1789334 This needs to be triaged by the troubleshooting lieutenants or the OSprofiler folks.
Incomplete:
https://bugs.launchpad.net/neutron/+bug/1790023
https://bugs.launchpad.net/neutron/+bug/1789870
https://bugs.launchpad.net/neutron/+bug/1789844

Wishlist:
https://bugs.launchpad.net/neutron/+bug/1789378

RFE:
https://bugs.launchpad.net/neutron/+bug/1789592
https://bugs.launchpad.net/neutron/+bug/1789391

Thanks,
Hirofumi

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jankihc91 at gmail.com Tue Sep 4 06:13:08 2018
From: jankihc91 at gmail.com (Janki Chhatbar)
Date: Tue, 4 Sep 2018 11:43:08 +0530
Subject: [openstack-dev] [Tripleo] Automating role generation
Message-ID: 

Hi

I am looking to automate role file generation in TripleO. The idea is basically for an operator to create a simple yaml file (operator.yaml, say) listing the services that are needed, and then for TripleO to generate Controller.yaml enabling only those services that are mentioned.

For example:

operator.yaml
services:
  Glance
  OpenDaylight
  Neutron ovs agent

Then TripleO should:

1. Fail, because ODL and the OVS agent are either-or services
2. After operator.yaml is modified to remove the Neutron ovs agent, generate Controller.yaml with the below content:

ServicesDefault:
  - OS::TripleO::Services::GlanceApi
  - OS::TripleO::Services::GlanceRegistry
  - OS::TripleO::Services::OpenDaylightApi
  - OS::TripleO::Services::OpenDaylightOvs

Currently, the operator has to manually edit the role file (especially when deploying with ODL), and I have seen many instances of failing deployments due to variations of OVS, OVN and ODL services being enabled when they are actually exclusive.

I am willing to spend some cycles on this. What I ask is some clarity on its feasibility, and any other ideas to make this into a feature.

-- 
Thanking you

Janki Chhatbar
OpenStack | Docker | SDN
simplyexplainedblog.wordpress.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From soulxu at gmail.com Tue Sep 4 07:01:53 2018 From: soulxu at gmail.com (Alex Xu) Date: Tue, 4 Sep 2018 15:01:53 +0800 Subject: [openstack-dev] Nominating Chris Dent for placement-core In-Reply-To: References: Message-ID: +1 Eric Fried 于2018年8月31日周五 下午11:45写道: > The openstack/placement project [1] and its core team [2] have been > established in gerrit. > > I hereby nominate Chris Dent for membership in the placement-core team. > He has been instrumental in the design, implementation, and stewardship > of the placement API since its inception and has shown clear and > consistent leadership. > > As we are effectively bootstrapping placement-core at this time, it > would seem appropriate to consider +1/-1 responses from heavy placement > contributors as well as existing cores (currently nova-core). > > [1] https://review.openstack.org/#/admin/projects/openstack/placement > [2] https://review.openstack.org/#/admin/groups/1936,members > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Sep 4 08:10:50 2018 From: zigo at debian.org (Thomas Goirand) Date: Tue, 4 Sep 2018 10:10:50 +0200 Subject: [openstack-dev] better name for placement (was:Nominating Chris Dent for placement-core) In-Reply-To: References: Message-ID: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> On 08/31/2018 05:45 PM, Eric Fried wrote: > The openstack/placement project [1] and its core team [2] have been > established in gerrit. > > I hereby nominate Chris Dent for membership in the placement-core team. 
> He has been instrumental in the design, implementation, and stewardship > of the placement API since its inception and has shown clear and > consistent leadership. > > As we are effectively bootstrapping placement-core at this time, it > would seem appropriate to consider +1/-1 responses from heavy placement > contributors as well as existing cores (currently nova-core). > > [1] https://review.openstack.org/#/admin/projects/openstack/placement > [2] https://review.openstack.org/#/admin/groups/1936,members Just a nit-pick... It's a shame we call it just placement. It could have been something like: foo: OpenStack placement Just like we have: nova: OpenStack compute No? Is it too late? Cheers, Thomas Goirand (zigo) From thierry at openstack.org Tue Sep 4 08:12:47 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 4 Sep 2018 10:12:47 +0200 Subject: [openstack-dev] [ptg] ptgbot HOWTO Message-ID: <12e26b51-a418-0df6-c1be-cc577252aa23@openstack.org> Hi everyone, In a few days some of us will meet in Denver for the 4th OpenStack PTG. The event is made of several 'tracks' (organized around a specific team/group or a specific theme). Topics of discussions are loosely scheduled in those tracks, based on the needs of the attendance. This allows to maximize attendee productivity, but the downside is that it can make the event a bit confusing to navigate. To mitigate that issue, we are using an IRC bot to expose what's happening currently at the event at the following page: http://ptg.openstack.org/ptg.html It is therefore useful to have a volunteer in each room who makes use of the PTG bot to communicate what's happening. This is done by joining the #openstack-ptg IRC channel on Freenode and voicing commands to the bot. 
How to keep attendees informed of what's being discussed in your room --------------------------------------------------------------------- To indicate what's currently being discussed, you will use the track name hashtag (found in the "Scheduled tracks" section on the above page), with the 'now' command: #TRACK now Example: #swift now brainstorming improvements to the ring You can also mention other track names to make sure to get people attention when the topic is transverse: #ops-meetup now discussing #cinder pain points There can only be one 'now' entry for a given track at a time. To indicate what will be discussed next, you can enter one or more 'next' commands: #TRACK next Example: #api-sig next at 2pm we'll be discussing pagination woes Note that in order to keep content current, entering a new 'now' command for a track will erase any 'next' entry for that track. Finally, if you want to clear all 'now' and 'next' entries for your track, you can issue the 'clean' command: #TRACK clean Example: #ironic clean How to book reservable rooms ---------------------------- Like at every PTG, in Denver we will have additional reservable space for extra un-scheduled discussions. In addition, some of the smaller teams do not have any pre-scheduled space, and will solely be relying on this feature to book the time that makes the most sense for them. Those teams are Chef OpenStack (#chef), LOCI (#loci), OpenStackClient (#osc), Puppet OpenStack (#puppet), Release Management (#relmgt), Requirements (#requirements), and Designate (#designate). The PTG bot page shows which track is allocated to which room, as well as available reservable space, with a slot code (room name - time slot) that you can use to issue a 'book' command to the PTG bot: #TRACK book Example: #relmgt book Ballroom C-TueA2 Any track can book additional space and time using this system. All slots are 1h45-long. 
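For illustration, the command grammar above is simple enough to parse mechanically. The sketch below is hypothetical (the real bot lives in openstack/ptgbot and its code differs); it only shows the "#TRACK verb rest" shape of the commands:

```python
import re

# Hypothetical sketch of the "#TRACK now|next|clean|book ..." grammar
# described above; not the actual ptgbot implementation.
COMMAND_RE = re.compile(
    r'^#(?P<track>[\w-]+)\s+(?P<verb>now|next|clean|book)\s*(?P<rest>.*)$')

def parse_command(line):
    """Return (track, verb, rest) for a recognized bot command, else None."""
    match = COMMAND_RE.match(line.strip())
    if not match:
        return None
    return (match.group('track'),
            match.group('verb'),
            match.group('rest').strip())

print(parse_command('#swift now brainstorming improvements to the ring'))
print(parse_command('#relmgt book Ballroom C-TueA2'))
```

Anything the regex does not recognize is simply ignored, which is roughly how a bot can share a busy IRC channel with normal conversation.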
If your topic of discussion does not fall into an existing track, it is easy to add a track on the fly. Just ask PTG bot admins (ttx, diablo_rojo, infra...) to create a track for you (which they can do by getting op rights and issuing a ~add command). For more information on the bot commands, please see: https://git.openstack.org/cgit/openstack/ptgbot/tree/README.rst -- Thierry Carrez (ttx) From smonderer at vasonanetworks.com Tue Sep 4 08:31:46 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Tue, 4 Sep 2018 11:31:46 +0300 Subject: [openstack-dev] [tripleo] using multiple roles Message-ID: Hi, Due to many different HW in our environment we have multiple roles. I would like to place each role definition if a different file. Is it possible to refer to all the roles from roles_data.yaml to all the different files instead of having a long roles_data.yaml file? Regards, Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Sep 4 08:32:20 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Sep 2018 09:32:20 +0100 (BST) Subject: [openstack-dev] better name for placement (was:Nominating Chris Dent for placement-core) In-Reply-To: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> Message-ID: On Tue, 4 Sep 2018, Thomas Goirand wrote: > Just a nit-pick... It's a shame we call it just placement. It could have > been something like: > > foo: OpenStack placement > > Just like we have: > > nova: OpenStack compute > > No? Is it too late? There was some discussion about this on one of the extraction-related etherpads [1] and the gist is that while it would be possible to change it, at this point "placement" is the name people use and are used to so there would have to be a very good reason to change it. All the docs and code talk about "placement", and python package names are already placement. 
It used to be the case that the service-oriented projects would have a project name different from their service-type because that was cool/fun [2] and it allowed for the possibility that there could be another project which provided the same service-type. That hasn't really come to pass, and now that we are on the far side of the hype curve, it doesn't really make much sense in terms of focusing energy.

My feeling is that there is already a lot of identity associated with the term "placement" and changing it would be too disruptive. Also, I hope that it will operate as a constraint on feature creep.

But if we were to change it, I vote for "katabatic", as a noun, even though it is an adjective.

[1] https://etherpad.openstack.org/p/placement-extract-stein-copy
    That was a copy of the original, which stopped working, but now that one has stopped working too. I'm going to attempt to reconstruct it today from the copies that people have.
[2] For certain values of...

-- 
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent

From 18800173600 at 163.com Tue Sep 4 08:41:25 2018
From: 18800173600 at 163.com (zhangwenqing)
Date: Tue, 4 Sep 2018 16:41:25 +0800 (CST)
Subject: [openstack-dev] Does anyone use Vitrage to build a mature project for RCA or any other purpose?
Message-ID: <66838c8a.dc89.165a3be5985.Coremail.18800173600@163.com>

I want to use Vitrage for my AIOps project, but I can't find any relevant information, and I think this is not a mature project. Does anyone have relevant experience? Would you please give me some advice?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jistr at redhat.com Tue Sep 4 08:48:07 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Tue, 4 Sep 2018 10:48:07 +0200 Subject: [openstack-dev] [Tripleo] Automating role generation In-Reply-To: References: Message-ID: <265867de-601f-6498-dc7f-4b50bf03904d@redhat.com> On 4.9.2018 08:13, Janki Chhatbar wrote: > Hi > > I am looking to automate role file generation in TripleO. The idea is > basically for an operator to create a simple yaml file (operator.yaml, say) > listing services that are needed and then TripleO to generate > Controller.yaml enabling only those services that are mentioned. > > For example: > operator.yaml > services: > Glance > OpenDaylight > Neutron ovs agent I'm not sure it's worth introducing a new file format as such, if the purpose is essentially to expand e.g. "Glance" into "OS::TripleO::Services::GlanceApi" and "OS::TripleO::Services::GlanceRegistry"? It would be another layer of indirection (additional mental work for the operator who wants to understand how things work), while the layer doesn't make too much difference in preparation of the role. At least that's my subjective view. > > Then TripleO should > 1. Fail because ODL and OVS agent are either-or services +1 i think having something like this would be useful. > 2. After operator.yaml is modified to remove Neutron ovs agent, it should > generate Controller.yaml with below content > > ServicesDefault: > - OS::TripleO::Services::GlanceApi > - OS::TripleO::Services::GlanceRegistry > - OS::TripleO::Services::OpenDaylightApi > - OS::TripleO::Services::OpenDaylightOvs > > Currently, operator has to manually edit the role file (specially when > deployed with ODL) and I have seen many instances of failing deployment due > to variations of OVS, OVN and ODL services enabled when they are actually > exclusive. Having validations on the service list would be helpful IMO, e.g. 
"these services must not be in one deployment together", "these services must not be in one role together", "these services must be together", "we recommend this service to be in every role" (i'm thinking TripleOPackages, Ntp, ...) etc. But as mentioned above, i think it would be better if we worked directly with the "OS::TripleO::Services..." values rather than a new layer of proxy-values. Additional random related thoughts: * Operator should still be able to disobey what the validation suggests, if they decide so. * Would be nice to have the info about particular services (e.g what can't be together) specified declaratively somewhere (TripleO's favorite thing in the world -- YAML?). * We could start with just one type of validation, e.g. the mutual exclusivity rule for ODL vs. OVS, but would be nice to have the solution easily expandable for new rule types. Thanks for looking into this :) Jirka > > I am willing to spend some cycle on this. What I ask is some clearity on > its feasibility and any other ideas to make this idea into a feature. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From balazs.gibizer at ericsson.com Tue Sep 4 08:50:30 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 04 Sep 2018 10:50:30 +0200 Subject: [openstack-dev] [nova]Notification subteam meeting cancelled Message-ID: <1536051030.23106.2@smtp.office365.com> Hi, This week's and next week's notification subteam meeting has been cancelled. See you in Denver. 
Cheers, gibi From shardy at redhat.com Tue Sep 4 09:26:15 2018 From: shardy at redhat.com (Steven Hardy) Date: Tue, 4 Sep 2018 10:26:15 +0100 Subject: [openstack-dev] [Tripleo] Automating role generation In-Reply-To: <265867de-601f-6498-dc7f-4b50bf03904d@redhat.com> References: <265867de-601f-6498-dc7f-4b50bf03904d@redhat.com> Message-ID: On Tue, Sep 4, 2018 at 9:48 AM, Jiří Stránský wrote: > On 4.9.2018 08:13, Janki Chhatbar wrote: >> >> Hi >> >> I am looking to automate role file generation in TripleO. The idea is >> basically for an operator to create a simple yaml file (operator.yaml, >> say) >> listing services that are needed and then TripleO to generate >> Controller.yaml enabling only those services that are mentioned. >> >> For example: >> operator.yaml >> services: >> Glance >> OpenDaylight >> Neutron ovs agent > > > I'm not sure it's worth introducing a new file format as such, if the > purpose is essentially to expand e.g. "Glance" into > "OS::TripleO::Services::GlanceApi" and > "OS::TripleO::Services::GlanceRegistry"? It would be another layer of > indirection (additional mental work for the operator who wants to understand > how things work), while the layer doesn't make too much difference in > preparation of the role. At least that's my subjective view. > >> >> Then TripleO should >> 1. Fail because ODL and OVS agent are either-or services > > > +1 i think having something like this would be useful. > >> 2. 
After operator.yaml is modified to remove Neutron ovs agent, it should
>> generate Controller.yaml with below content
>>
>> ServicesDefault:
>> - OS::TripleO::Services::GlanceApi
>> - OS::TripleO::Services::GlanceRegistry
>> - OS::TripleO::Services::OpenDaylightApi
>> - OS::TripleO::Services::OpenDaylightOvs
>>
>> Currently, operator has to manually edit the role file (specially when
>> deployed with ODL) and I have seen many instances of failing deployment
>> due
>> to variations of OVS, OVN and ODL services enabled when they are actually
>> exclusive.
>
> Having validations on the service list would be helpful IMO, e.g. "these
> services must not be in one deployment together", "these services must not
> be in one role together", "these services must be together", "we recommend
> this service to be in every role" (i'm thinking TripleOPackages, Ntp, ...)
> etc. But as mentioned above, i think it would be better if we worked
> directly with the "OS::TripleO::Services..." values rather than a new layer
> of proxy-values.
>
> Additional random related thoughts:
>
> * Operator should still be able to disobey what the validation suggests, if
> they decide so.
>
> * Would be nice to have the info about particular services (e.g what can't
> be together) specified declaratively somewhere (TripleO's favorite thing in
> the world -- YAML?).
>
> * We could start with just one type of validation, e.g. the mutual
> exclusivity rule for ODL vs. OVS, but would be nice to have the solution
> easily expandable for new rule types.

This is similar to how the UI uses the capabilities-map.yaml, so perhaps we can use that as the place to describe service dependencies and conflicts?

https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities-map.yaml

Currently this isn't used at all for the CLI, but I can imagine some kind of wizard interface being useful, e.g. you could say enable the "Glance" group and it'd automatically pull in all glance dependencies?
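To make the declarative-rules idea a bit more concrete, here is a minimal sketch of a mutual-exclusivity check over a role's service list. To be clear, this is hypothetical: the EXCLUSIVE_GROUPS rule list and the validate_services helper do not exist in TripleO today, and a real implementation would load the rules from YAML (capabilities-map.yaml or similar) rather than hard-coding them:

```python
# Hypothetical sketch of a declarative mutual-exclusivity check over a
# role's service list; none of these names exist in TripleO today, and
# the service names are only illustrative.
EXCLUSIVE_GROUPS = [
    # Services that must not be enabled together in one role.
    {'OS::TripleO::Services::OpenDaylightApi',
     'OS::TripleO::Services::NeutronOvsAgent'},
]

def validate_services(services):
    """Return a list of human-readable errors for a role's service list."""
    errors = []
    enabled = set(services)
    for group in EXCLUSIVE_GROUPS:
        conflict = sorted(enabled & group)
        if len(conflict) > 1:
            errors.append('mutually exclusive services enabled: '
                          + ', '.join(conflict))
    return errors

role_services = ['OS::TripleO::Services::GlanceApi',
                 'OS::TripleO::Services::OpenDaylightApi',
                 'OS::TripleO::Services::NeutronOvsAgent']
for error in validate_services(role_services):
    print(error)
```

The same shape extends to "must be together" and "recommended in every role" rules by adding further rule lists and corresponding checks.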
Another thing to mention is that this doesn't necessarily have to generate a new role (although it could); the *Services parameter for existing roles can be overridden, so it might be simpler to generate an environment file instead.

Steve

From ifatafekn at gmail.com Tue Sep 4 09:29:43 2018
From: ifatafekn at gmail.com (Ifat Afek)
Date: Tue, 4 Sep 2018 12:29:43 +0300
Subject: [openstack-dev] Does anyone use Vitrage to build a mature project for RCA or any other purpose?
In-Reply-To: <66838c8a.dc89.165a3be5985.Coremail.18800173600@163.com>
References: <66838c8a.dc89.165a3be5985.Coremail.18800173600@163.com>
Message-ID: 

On Tue, Sep 4, 2018 at 11:41 AM zhangwenqing <18800173600 at 163.com> wrote:

> I want to use Vitrage for my AIOps project, but I can't find any relevant
> information, and I think this is not a mature project. Does anyone have
> relevant experience? Would you please give me some advice?

Hi,

Vitrage is used in production environments as part of the Nokia CloudBand Infrastructure Software and CloudBand Network Director products. The project has existed for three years now, and it is mature. I'll be happy to help if you have other questions.

Br,
Ifat

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mbultel at redhat.com Tue Sep 4 10:20:37 2018
From: mbultel at redhat.com (mathieu bultel)
Date: Tue, 4 Sep 2018 12:20:37 +0200
Subject: [openstack-dev] [tripleo] quickstart for humans
In-Reply-To: <20180830142821.gw76edbscvhh3afp@localhost.localdomain>
References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain>
Message-ID: <262f467f-9028-9ea8-7af5-ee30e8238a45@redhat.com>

Hi

On 08/30/2018 04:28 PM, Honza Pokorny wrote:
> Hello!
>
> Over the last few months, it seems that tripleo-quickstart has evolved
> into a CI tool. It's primarily used by computers, and not humans.
> tripleo-quickstart is a helpful set of ansible playbooks, and a
> collection of feature sets. However, it's become less useful for
However, it's become less useful for > setting up development environments by humans. For example, devmode.sh > was recently deprecated without a user-friendly replacement. Moreover, > during some informal irc conversations in #oooq, some developers even > mentioned the plan to merge tripleo-quickstart and tripleo-ci. > > I think it would be beneficial to create a set of defaults for > tripleo-quickstart that can be used to spin up new environments; a set > of defaults for humans. This can either be a well-maintained script in > tripleo-quickstart itself, or a brand new project, e.g. > tripleo-quickstart-humans. The number of settings, knobs, and flags > should be kept to a minimum. > > This would accomplish two goals: > > 1. It would bring uniformity to the team. Each environment is > installed the same way. When something goes wrong, we can > eliminate differences in setup when debugging. This should save a > lot of time. > > 2. Quicker and more reliable environment setup. If the set of defaults > is used by many people, it should container fewer bugs because more > people using something should translate into more bug reports, and > more bug fixes. > > These thoughts are coming from the context of tripleo-ui development. I > need an environment in order to develop, but I don't necessarily always > care about how it's installed. I want something that works for most > scenarios. > > What do you think? Does this make sense? Does something like this > already exist? I'm agree with the fact that quickstart has turned into a CI tool, more than a human tool. I still use quickstart to deploy and work on TripleO but I feel a bit lost when I have to grep into the config/ dirctory to see which featureset match to my needs and, if not, try to tweak the config and pray that the tweaked options will work as expected. I also discovered the ci reproducer script recently. 
The script probably needs to be a bit more robust, but it's a real gain when you have to reproduce CI environments.

I think a first, lower-effort step for now would be to bring in a set of quickstart commands and "human readable" config files to improve the situation.

> Thanks for listening!
>
> Honza
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dtantsur at redhat.com Tue Sep 4 11:01:07 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 4 Sep 2018 13:01:07 +0200
Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API
In-Reply-To: 
References: 
Message-ID: <5bf25256-2160-0335-fc37-32a2fb060e8c@redhat.com>

Hi,

On 08/29/2018 08:36 AM, Edison Xiang wrote:
> Hi team,
>
> As we know, Open API 3.0 was released in July 2017, about one year ago.
> Open API 3.0 supports some new features like anyOf, oneOf and allOf
> compared to Open API 2.0 (Swagger 2.0).
> Now OpenStack projects do not support Open API.
> Also I found some old emails in the Mail List about supporting Open API 2.0 in
> OpenStack.
>
> Some limitations are mentioned in the Mail List for the OpenStack API:
> 1. The POST */action APIs.
>     These APIs exist in lots of projects like nova and cinder.
>     These APIs have the same URI, but the responses will be different when the
>     request is different.
> 2. Microversions.
>     These are controlled via headers, which are sometimes used to describe
>     behavioral changes in an API, not just request/response schema changes.
>
> About the first limitation, we can find the solution in Open API 3.0.
> The example [2] shows that we can define different request/response in the same
> URI by the anyOf feature in Open API 3.0.
This is a good first step, but if I get it right it does not specify which response corresponds to which request. > > About the micro versions problem, I think it is not a limitation related to a > specific API standard. > We can list all micro version API schema files in one directory, like nova/V2, I don't think this approach will scale if you plan to generate anything based on these schemas. If you generate client code from separate schema files, you'll essentially end up with dozens of major versions. > or we can list the schema changes between micro versions as the tempest project did [3]. ++ > > Based on Open API 3.0, it can bring lots of benefits for the OpenStack Community and > does not impact the current features the Community has. > For example, we can automatically generate API documents and different language > Clients (SDKs), maybe for different micro versions, From my experience with writing an SDK, I don't believe generating a complete SDK from API schemas is useful. Maybe generating low-level protocol code to base an SDK on, but even that may be easier to do by hand. Dmitry > and generate cloud tool adapters for OpenStack, like ansible modules, terraform > providers and so on. > Also we can make an API UI to provide an online and visible API search and API > calling for every OpenStack API. > 3rd party developers can also do some self-defined development.
> > [1] https://github.com/OAI/OpenAPI-Specification > [2] > https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml#L94-L109 > [3] > https://github.com/openstack/tempest/tree/master/tempest/lib/api_schema/response/compute > > Best Regards, > Edison Xiang > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dangtrinhnt at gmail.com Tue Sep 4 11:50:13 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 4 Sep 2018 20:50:13 +0900 Subject: [openstack-dev] [TC][Searchlight] Searchlight project missing from the OpenStack website Message-ID: Dear TC, I'm not sure if I missed something but Searchlight is not listed in the Software section of the OpenStack website [1]. Is it because Searchlight has missed the Rocky cycle? Bests, [1] https://www.openstack.org/software/project-navigator/openstack-components#operations-services *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Tue Sep 4 12:21:51 2018 From: alee at redhat.com (Ade Lee) Date: Tue, 04 Sep 2018 08:21:51 -0400 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <1535671314-sup-5525@lrrr.local> References: <1535671314-sup-5525@lrrr.local> Message-ID: <1536063711.11937.33.camel@redhat.com> Barbican is ready. Thanks, Ade On Thu, 2018-08-30 at 19:24 -0400, Doug Hellmann wrote: > Below is the list of project teams that have not yet started > migrating > their zuul configuration. 
If you're ready to go, please respond to > this > email to let us know so we can start proposing patches. > > Doug > > > adjutant | 3 repos | > > barbican | 5 repos | > > Chef OpenStack | 19 repos | > > cinder | 6 repos | > > cloudkitty | 5 repos | > > I18n | 2 repos | > > Infrastructure | 158 repos | > > loci | 1 repos | > > nova | 6 repos | > > OpenStack Charms | 80 repos | > > Packaging-rpm | 4 repos | > > Puppet OpenStack | 47 repos | > > Quality Assurance | 22 repos | > > Telemetry | 8 repos | > > trove | 5 repos | > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Sep 4 12:30:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 04 Sep 2018 08:30:40 -0400 Subject: [openstack-dev] [election][tc] announcing candidacy Message-ID: <1536064192-sup-380@lrrr.local> I am announcing my candidacy for a position on the OpenStack Technical Committee. I started contributing to OpenStack in 2012, not long after joining Dreamhost, and I am currently employed by Red Hat to work on OpenStack with a focus on long-term project concerns. I have served on the Technical Committee for the last five years, including as Chair during the last term. I have also been PTL of the Oslo and Release Management teams at different points in the past. I have spent most of my time in all of those roles over the last few years making incremental improvements in our ability to collaborate while building OpenStack, including initiatives such as leading the current community goal to run CI jobs under Python 3 by default [1]; coordinating last year's documentation migration; and updating our dependency management system to make it easier for projects to run stand-alone. 
During my time serving as TC Chair, I have tried to update the way the group works with the community. We started by performing a "health check" for all of our project teams [2], as a way to spot potential issues teams are experiencing that we can help with, and to encourage TC members to learn more about teams they may not interact with on a daily basis. We will be reviewing the results at the PTG [3], and continuing to refine that process. I have also had a few opportunities this year to share our governance structure with other communities [4][5]. It's exciting to be able to talk to them about how the ideals and principles that hold our community together can also apply to their projects. The OpenStack community continues to be the most welcoming group I have interacted with in more than 25 years of contributing to open source projects. I look forward to another opportunity to serve the project through the Technical Committee over the coming year. Thank you, Doug Candidacy submission: https://review.openstack.org/599582 Review history: https://review.openstack.org/#/q/reviewer:2472,n,z Commit history: https://review.openstack.org/#/q/owner:2472,n,z Foundation Profile: http://www.openstack.org/community/members/profile/359 Freenode: dhellmann Website: https://doughellmann.com [1] https://governance.openstack.org/tc/goals/stein/python3-first.html [2] https://wiki.openstack.org/wiki/OpenStack_health_tracker [3] https://etherpad.openstack.org/p/tc-stein-ptg [4] https://doughellmann.com/blog/2018/08/21/planting-acorns/ [5] https://www.python.org/dev/peps/pep-8002/ From ifatafekn at gmail.com Tue Sep 4 13:04:23 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Tue, 4 Sep 2018 16:04:23 +0300 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari][vitrage] Possible deprecation of Nova's legacy notification interface In-Reply-To: <1535466670.23583.3@smtp.office365.com> References: <1533807698.26377.7@smtp.office365.com> 
<1535466670.23583.3@smtp.office365.com> Message-ID: Hi, Vitrage also uses the Nova legacy notifications. Unfortunately I will not attend the PTG, but I added the relevant information in the etherpad. Thanks, Ifat On Tue, Aug 28, 2018 at 5:31 PM Balázs Gibizer wrote: > Thanks for all the responses. I collected them on the nova ptg > discussion etherpad [1] (L186 at the moment). The nova team will talk > about deprecation of the legacy interface on Friday on the PTG. If you > want to participate in the discussion but you are not planning to sit in > the nova room the whole day, then let me know and I will try to ping you > over IRC when we are about to start the item. > > Cheers, > gibi > > [1] https://etherpad.openstack.org/p/nova-ptg-stein -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Tue Sep 4 13:34:33 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 4 Sep 2018 08:34:33 -0500 Subject: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests Message-ID: <11be89ad-a59a-1fe6-5c7b-badb4a06e643@fried.cc> Folks- The other day, I posted an experimental patch [1] with an effectively empty ComputeDriver (just enough to make n-cpu actually start) to see how much of our CI would pass. The theory being that any tests that still pass are tests that don't touch our compute driver, and are therefore not useful to run in our CI environment. Because anything that doesn't touch our code should already be well covered by generic dsvm-tempest CIs. The results [2] show that 707 tests still pass. So I'm wondering whether there might be a way to mark tests as being "compute driver-specific" such that we could switch off all the other ones [3] via a one-line conf setting. Because surely this has potential to save a lot of CI resource not just for us but for other driver vendors, in tree and out.
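As a sketch of what such a marker could look like — the decorator and environment flag here are hypothetical, not an existing tempest or nova API — tests could carry an attribute, and a runner-side switch could skip everything unmarked:

```python
import os
import unittest

def compute_driver_test(func):
    # Hypothetical marker: this test exercises the compute driver.
    func._compute_driver = True
    return func

def driver_tests_only(cls):
    """If DRIVER_TESTS_ONLY is set, skip every unmarked test method."""
    if os.environ.get("DRIVER_TESTS_ONLY"):
        for name in list(vars(cls)):
            if not name.startswith("test_"):
                continue
            method = getattr(cls, name)
            if not getattr(method, "_compute_driver", False):
                setattr(cls, name,
                        unittest.skip("does not touch the compute driver")(method))
    return cls

os.environ["DRIVER_TESTS_ONLY"] = "1"

@driver_tests_only
class ServerTests(unittest.TestCase):
    @compute_driver_test
    def test_spawn(self):
        self.assertTrue(True)

    def test_flavor_listing(self):  # would pass with an empty driver too
        self.assertTrue(True)

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(ServerTests).run(result)
print(result.testsRun, len(result.skipped))  # 2 tests seen, 1 skipped
```

The same attribute could also drive a "run the full suite every Nth build" policy instead of a hard skip.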
Thanks, efried [1] https://review.openstack.org/#/c/599066/ [2] http://184.172.12.213/66/599066/5/check/nova-powervm-out-of-tree-pvm/a1b42d5/powervm_os_ci.html.gz [3] I get that there's still value in running all those tests. But it could be done like once every 10 or 50 or 100 runs instead of every time. From fungi at yuggoth.org Tue Sep 4 13:37:41 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 4 Sep 2018 13:37:41 +0000 Subject: [openstack-dev] better name for placement (was:Nominating Chris Dent for placement-core) In-Reply-To: References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> Message-ID: <20180904133741.eetizhx4rksarmg7@yuggoth.org> On 2018-09-04 09:32:20 +0100 (+0100), Chris Dent wrote: [...] > it allowed for the possibility that there could be another project > which provided the same service-type. That hasn't really come to > pass [...] It still might make sense to attempt to look at this issue from outside the limited scope of the OpenStack community. Is the expectation that the project when packaged (on PyPI, in Linux distributions and so on) will just be referred to as "placement" with no further context? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aschultz at redhat.com Tue Sep 4 13:51:30 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 4 Sep 2018 07:51:30 -0600 Subject: [openstack-dev] [tripleo] using multiple roles In-Reply-To: References: Message-ID: On Tue, Sep 4, 2018 at 2:31 AM, Samuel Monderer wrote: > Hi, > > Due to many different HW in our environment we have multiple roles. > I would like to place each role definition in a different file. > Is it possible to refer to all the roles from roles_data.yaml to all the > different files instead of having a long roles_data.yaml file?
> So you can have them in different files for general management, however in order to actually consume them they need to be in a roles_data.yaml file for the deployment. We offer a few cli commands to help with this management. The 'openstack overcloud roles generate' command can be used to generate a roles_data.yaml for your deployment. You can store the individual roles in a folder and use the 'openstack overcloud roles list --roles-path /your/folder' to view the available roles. This workflow is described in the roles README[0] Thanks, -Alex [0] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/roles/README.rst > Regards, > Samuel > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From balazs.gibizer at ericsson.com Tue Sep 4 14:02:29 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 04 Sep 2018 16:02:29 +0200 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari][vitrage]Possible deprecation of Nova's legacy notification interface In-Reply-To: References: <1533807698.26377.7@smtp.office365.com> <1535466670.23583.3@smtp.office365.com> Message-ID: <1536069749.27194.0@smtp.office365.com> On Tue, Sep 4, 2018 at 3:04 PM, Ifat Afek wrote: > Hi, > > Vitrage also uses the Nova legacy notifications. > Unfortunately I will not attend the PTG, but I added the relevant > information in the etherpad. Thanks for the pad update. Cheers, gibi > > Thanks, > Ifat > > On Tue, Aug 28, 2018 at 5:31 PM Balázs Gibizer > wrote: >> Thanks for all the responses. I collected them on the nova ptg >> discussion etherpad [1] (L186 at the moment). The nova team will talk >> about deprecation of the legacy interface on Friday on the PTG. 
If >> you >> want to participate in the discussion but you are not planning to sit in >> the nova room the whole day, then let me know and I will try to ping you >> over IRC when we are about to start the item. >> >> Cheers, >> gibi >> >> [1] https://etherpad.openstack.org/p/nova-ptg-stein > From e0ne at e0ne.info Tue Sep 4 14:03:13 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 4 Sep 2018 17:03:13 +0300 Subject: [openstack-dev] [horizon][stable] Removing Inactive Cores Message-ID: Hi team, Since we're at the beginning of the Stein release cycle, I think it's a good time to do some cleanup in our core teams. First of all, I would like to say a big Thank you to everybody who contributed as Core Reviewer to Horizon. Unfortunately, some people became inactive as reviewers during the last few cycles; that's why I propose to remove them from the Horizon Core and Horizon Stable Core teams. I'm pretty sure that you could be added to the teams again once your contribution is active again. Changes to horizon-core team: - Kenji Ishii - Tatiana Ovchinnikova - Ying Zuo Changes to horizon-stable-maint team: - Doug Fish - Lin Hua Cheng - Matthias Runge - Richard Jones - Rob Cresswell - Thai Tran - Ying Zuo Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Sep 4 14:05:28 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 4 Sep 2018 10:05:28 -0400 Subject: [openstack-dev] better name for placement In-Reply-To: <20180904133741.eetizhx4rksarmg7@yuggoth.org> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> Message-ID: On 09/04/2018 09:37 AM, Jeremy Stanley wrote: > On 2018-09-04 09:32:20 +0100 (+0100), Chris Dent wrote: > [...] >> it allowed for the possibility that there could be another project >> which provided the same service-type. That hasn't really come to >> pass > [...]
> > It still might make sense to attempt to look at this issue from > outside the limited scope of the OpenStack community. Is the > expectation that the project when packaged (on PyPI, in Linux > distributions and so on) will just be referred to as "placement" > with no further context? I don't see any reason why the package name needs to be identical to the repo name. Is there a reason we couldn't have openstack-placement be the package name? Best, -jay From smonderer at vasonanetworks.com Tue Sep 4 14:15:30 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Tue, 4 Sep 2018 17:15:30 +0300 Subject: [openstack-dev] [tripleo] using multiple roles In-Reply-To: References: Message-ID: Is it possible to have the roles_data.yaml file generated when running "openstack overcloud deploy"?? On Tue, Sep 4, 2018 at 4:52 PM Alex Schultz wrote: > On Tue, Sep 4, 2018 at 2:31 AM, Samuel Monderer > wrote: > > Hi, > > > > Due to many different HW in our environment we have multiple roles. > > I would like to place each role definition if a different file. > > Is it possible to refer to all the roles from roles_data.yaml to all the > > different files instead of having a long roles_data.yaml file? > > > > So you can have them in different files for general management, > however in order to actually consume them they need to be in a > roles_data.yaml file for the deployment. We offer a few cli commands > to help with this management. The 'openstack overcloud roles > generate' command can be used to generate a roles_data.yaml for your > deployment. You can store the individual roles in a folder and use the > 'openstack overcloud roles list --roles-path /your/folder' to view the > available roles. 
This workflow is described in the roles README[0] > > Thanks, > -Alex > > [0] > http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/roles/README.rst > > > Regards, > > Samuel > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Tue Sep 4 14:18:30 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 4 Sep 2018 08:18:30 -0600 Subject: [openstack-dev] [tripleo] using multiple roles In-Reply-To: References: Message-ID: On Tue, Sep 4, 2018 at 8:15 AM, Samuel Monderer wrote: > Is it possible to have the roles_data.yaml file generated when running > "openstack overcloud deploy"?? > Not at this time. That is something we'd like to get to, but is not currently prioritized. Thanks, -Alex > On Tue, Sep 4, 2018 at 4:52 PM Alex Schultz wrote: >> >> On Tue, Sep 4, 2018 at 2:31 AM, Samuel Monderer >> wrote: >> > Hi, >> > >> > Due to many different HW in our environment we have multiple roles. >> > I would like to place each role definition if a different file. >> > Is it possible to refer to all the roles from roles_data.yaml to all the >> > different files instead of having a long roles_data.yaml file? >> > >> >> So you can have them in different files for general management, >> however in order to actually consume them they need to be in a >> roles_data.yaml file for the deployment. 
We offer a few cli commands >> to help with this management. The 'openstack overcloud roles >> generate' command can be used to generate a roles_data.yaml for your >> deployment. You can store the individual roles in a folder and use the >> 'openstack overcloud roles list --roles-path /your/folder' to view the >> available roles. This workflow is described in the roles README[0] >> >> Thanks, >> -Alex >> >> [0] >> http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/roles/README.rst >> >> > Regards, >> > Samuel >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From cdent+os at anticdent.org Tue Sep 4 14:32:12 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Sep 2018 15:32:12 +0100 (BST) Subject: [openstack-dev] better name for placement In-Reply-To: References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> Message-ID: On Tue, 4 Sep 2018, Jay Pipes wrote: > Is there a reason we couldn't have openstack-placement be the package name? I would hope we'd be able to do that, and probably should do that. 
'openstack-placement' seems a fine pypi package name for a thing from which you do 'import placement' to do some openstack stuff, yeah? Last I checked the concept of the package name is sort of put off until we have passing tests, but we're nearly there on that. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From tobias.rydberg at citynetwork.eu Tue Sep 4 14:36:01 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Tue, 4 Sep 2018 16:36:01 +0200 Subject: [openstack-dev] [publiccloud-wg] Meeting tomorrow for Public Cloud WG Message-ID: <97dd2292-cea9-29e0-4d0e-b33ac8a5bc76@citynetwork.eu> Hi folks, Time for a new meeting for the Public Cloud WG. Agenda draft can be found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to add items to that list. See you all tomorrow at 0700 UTC - IRC channel #openstack-publiccloud Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From luka.peschke at objectif-libre.com Tue Sep 4 14:52:29 2018 From: luka.peschke at objectif-libre.com (Luka Peschke) Date: Tue, 04 Sep 2018 16:52:29 +0200 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <1535720627-sup-1016@lrrr.local> References: <1535671314-sup-5525@lrrr.local> <5a0ea1129dc9ecaf64f52668255ea4b6@objectif-libre.com> <1535720627-sup-1016@lrrr.local> Message-ID: <115d51aa0c88e7f16a152f3a772e3563@objectif-libre.com> All changes on cloudkitty projects have been merged, so we're ready for https://review.openstack.org/#/c/598929. Thank you again!
-- Luka Peschke Développeur +33 (0) 5 82 95 65 36 5 rue du Moulin Bayard - 31000 Toulouse www.objectif-libre.com Le 2018-08-31 15:04, Doug Hellmann a écrit : > Excerpts from Christophe Sauthier's message of 2018-08-31 11:20:33 > +0200: > >> We are ready to start on the cloudkitty's team ! >> >> Christophe > > Here are the patches: > > +-------------------------------------------------+-------------------------------------+-------------------------------------+---------------+ > | Subject | Repo > | URL | Branch > | > +-------------------------------------------------+-------------------------------------+-------------------------------------+---------------+ > | remove job settings for cloudkitty repositories | > openstack-infra/project-config | > https://review.openstack.org/598929 | master | > | import zuul job settings from project-config | > openstack/cloudkitty | > https://review.openstack.org/598884 | master | > | switch documentation job to new PTI | > openstack/cloudkitty | > https://review.openstack.org/598885 | master | > | add python 3.6 unit test job | > openstack/cloudkitty | > https://review.openstack.org/598886 | master | > | import zuul job settings from project-config | > openstack/cloudkitty | > https://review.openstack.org/598900 | stable/ocata | > | import zuul job settings from project-config | > openstack/cloudkitty | > https://review.openstack.org/598906 | stable/pike | > | import zuul job settings from project-config | > openstack/cloudkitty | > https://review.openstack.org/598912 | stable/queens | > | import zuul job settings from project-config | > openstack/cloudkitty | > https://review.openstack.org/598918 | stable/rocky | > | import zuul job settings from project-config | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598888 | master | > | switch documentation job to new PTI | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598889 | master | > | add python 3.6 unit test job | > 
openstack/cloudkitty-dashboard | > https://review.openstack.org/598890 | master | > | import zuul job settings from project-config | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598902 | stable/ocata | > | import zuul job settings from project-config | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598908 | stable/pike | > | import zuul job settings from project-config | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598914 | stable/queens | > | import zuul job settings from project-config | > openstack/cloudkitty-dashboard | > https://review.openstack.org/598920 | stable/rocky | > | import zuul job settings from project-config | > openstack/cloudkitty-specs | > https://review.openstack.org/598893 | master | > | import zuul job settings from project-config | > openstack/cloudkitty-tempest-plugin | > https://review.openstack.org/598895 | master | > | import zuul job settings from project-config | > openstack/python-cloudkittyclient | > https://review.openstack.org/598897 | master | > | add python 3.6 unit test job | > openstack/python-cloudkittyclient | > https://review.openstack.org/598898 | master | > | import zuul job settings from project-config | > openstack/python-cloudkittyclient | > https://review.openstack.org/598904 | stable/ocata | > | import zuul job settings from project-config | > openstack/python-cloudkittyclient | > https://review.openstack.org/598910 | stable/pike | > | import zuul job settings from project-config | > openstack/python-cloudkittyclient | > https://review.openstack.org/598917 | stable/queens | > | import zuul job settings from project-config | > openstack/python-cloudkittyclient | > https://review.openstack.org/598923 | stable/rocky | > +-------------------------------------------------+-------------------------------------+-------------------------------------+---------------+ > > __________________________________________________________________________ > OpenStack 
Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From smonderer at vasonanetworks.com Tue Sep 4 15:28:24 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Tue, 4 Sep 2018 18:28:24 +0300 Subject: [openstack-dev] [tripleo] VFs not configured in SR-IOV role Message-ID: Hi, Attached is the used to deploy an overcloud with SR-IOV role. The deployment completed successfully but the VFs aren't configured on the host. Can anyone have a look at what I missed. Thanks Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: templates.zip Type: application/zip Size: 26577 bytes Desc: not available URL: From doug at doughellmann.com Tue Sep 4 15:42:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 04 Sep 2018 11:42:34 -0400 Subject: [openstack-dev] better name for placement In-Reply-To: References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> Message-ID: <1536075712-sup-2014@lrrr.local> Excerpts from Jay Pipes's message of 2018-09-04 10:05:28 -0400: > On 09/04/2018 09:37 AM, Jeremy Stanley wrote: > > On 2018-09-04 09:32:20 +0100 (+0100), Chris Dent wrote: > > [...] > >> it allowed for the possibility that there could be another project > >> which provided the same service-type. That hasn't really come to > >> pass > > [...] > > > > It still might make sense to attempt to look at this issue from > > outside the limited scope of the OpenStack community. Is the > > expectation that the project when packaged (on PyPI, in Linux > > distributions and so on) will just be referred to as "placement" > > with no further context? > > I don't see any reason why the package name needs to be identical to the > repo name. 
> > Is there a reason we couldn't have openstack-placement be the package name? That would work fine. The package name is set in setup.cfg and we have several examples where the value there and repo name don't match. Doug > > Best, > -jay > From doug at doughellmann.com Tue Sep 4 15:44:41 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 04 Sep 2018 11:44:41 -0400 Subject: [openstack-dev] better name for placement In-Reply-To: References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> Message-ID: <1536075775-sup-7652@lrrr.local> Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100: > On Tue, 4 Sep 2018, Jay Pipes wrote: > > > Is there a reason we couldn't have openstack-placement be the package name? > > I would hope we'd be able to do that, and probably should do that. > 'openstack-placement' seems a find pypi package name for a think > from which you do 'import placement' to do some openstack stuff, > yeah? That's still a pretty generic name for the top-level import, but I think the only real risk is that the placement service couldn't be installed at the same time as another package owned by someone else that used that top-level name. I'm not sure how much of a risk that really is. > > Last I checked the concept of the package name is sort of put off > until we have passing tests, but we're nearly there on that. > From ksambor at redhat.com Tue Sep 4 15:53:14 2018 From: ksambor at redhat.com (Kamil Sambor) Date: Tue, 4 Sep 2018 17:53:14 +0200 Subject: [openstack-dev] [tripleo] Posibilities to aggregate/merge configs across templates Message-ID: Hi all, I want to start discussion on: how to solve issue with merging environment values in TripleO. Description: In TripleO we experience some issues related to setting parameters in heat templates. First, it isn't possible to set some params as ultimate source of truth (disallow to overwrite param in other heat templates). 
Second, it isn't possible to merge values from different templates [0][1]. Both features are implemented in heat and can be easily used in templates. [2][3] This doesn't work in TripleO because we overwrite all values in the templates in the python client instead of aggregating them, or simply letting heat do the job. [4][5] Solution: Example solutions are: we can fix how the python tripleo client works with environments and templates and enable the heat features, or we can write some puppet code that works similarly to the firewall code [6] and supports aggregating and merging the values that we point out. Both solutions have pros and cons, but IMHO the solution that lets heat do the job is preferable. On the other hand, the merging solution gives us full control over how environments are merged. Problems: Only a few as a start: with both solutions we will have the same problem of porting new patches which use these functionalities to older versions of rhel. Upgrades to the new version can also be really problematic. Also, changes which enable the heat feature will totally change how templates work: we will need to change all templates, change the default behavior (which is to merge params) to override behavior, and also add the possibility to temporarily run the old behavior. In the end, I prepared two patchsets with two PoCs in progress. The first one merges environments in the tripleo client but uses the heat merging functionality: https://review.openstack.org/#/c/599322/ . In the second, we ignore the merged environment, move all the files, and add them to the deployment plan environments: https://review.openstack.org/#/c/599559/ What do you think about each solution? Which solution should be used in TripleO?
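For illustration, the merge semantics under discussion (aggregating environment files instead of letting the last one overwrite everything) can be sketched in a few lines of Python. This is a hedged sketch of the general idea, not the actual heat or tripleoclient implementation:

```python
def merge_environments(base, override):
    """Recursively merge two environment dicts.

    Nested dict sections such as parameter_defaults are aggregated key
    by key; for scalar values the later environment wins, which mirrors
    today's override behavior for conflicting keys.
    """
    merged = dict(base)
    for key, value in override.items():
        if (key in merged and isinstance(merged[key], dict)
                and isinstance(value, dict)):
            merged[key] = merge_environments(merged[key], value)
        else:
            merged[key] = value  # scalar or type mismatch: last file wins
    return merged

env_a = {"parameter_defaults": {"NtpServer": "pool.ntp.org"}}
env_b = {"parameter_defaults": {"TimeZone": "UTC"}}
print(merge_environments(env_a, env_b))
# {'parameter_defaults': {'NtpServer': 'pool.ntp.org', 'TimeZone': 'UTC'}}
```

With today's client behavior, env_b's parameter_defaults would replace env_a's entirely; the question above is essentially where this aggregation should live.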
Best, Kamil Sambor [0] https://bugs.launchpad.net/tripleo/+bug/1716391 [1] https://bugs.launchpad.net/heat/+bug/1635409 [2] https://docs.openstack.org/heat/pike/template_guide/environment.html#restrict-update-or-replace-of-a-given-resource [3] https://docs.openstack.org/heat/pike/template_guide/environment.html#environment-merging [4] https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/utils.py#L1019 [5] https://github.com/openstack/python-heatclient/blob/f73c2a4177377b710a02577feea38560b00a24bf/heatclient/common/template_utils.py#L191 [6] https://github.com/openstack/puppet-tripleo/tree/master/manifests/firewall From doug at doughellmann.com Tue Sep 4 15:55:07 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 04 Sep 2018 11:55:07 -0400 Subject: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon! In-Reply-To: <1536063711.11937.33.camel@redhat.com> References: <1535671314-sup-5525@lrrr.local> <1536063711.11937.33.camel@redhat.com> Message-ID: <1536076479-sup-9911@lrrr.local> Excerpts from Ade Lee's message of 2018-09-04 08:21:51 -0400: > Barbican is ready. 
>
> Thanks,
> Ade

Here you go:

+-----------------------------------------------+-----------------------------------+-------------------------------------+---------------+
| Subject                                       | Repo                              | URL                                 | Branch        |
+-----------------------------------------------+-----------------------------------+-------------------------------------+---------------+
| remove job settings for barbican repositories | openstack-infra/project-config    | https://review.openstack.org/599663 | master        |
| import zuul job settings from project-config  | openstack/barbican                | https://review.openstack.org/599644 | master        |
| switch documentation job to new PTI           | openstack/barbican                | https://review.openstack.org/599645 | master        |
| add python 3.6 unit test job                  | openstack/barbican                | https://review.openstack.org/599646 | master        |
| import zuul job settings from project-config  | openstack/barbican                | https://review.openstack.org/599655 | stable/ocata  |
| import zuul job settings from project-config  | openstack/barbican                | https://review.openstack.org/599657 | stable/pike   |
| import zuul job settings from project-config  | openstack/barbican                | https://review.openstack.org/599659 | stable/queens |
| import zuul job settings from project-config  | openstack/barbican                | https://review.openstack.org/599661 | stable/rocky  |
| import zuul job settings from project-config  | openstack/barbican-specs          | https://review.openstack.org/599647 | master        |
| import zuul job settings from project-config  | openstack/barbican-tempest-plugin | https://review.openstack.org/599648 | master        |
| import zuul job settings from project-config  | openstack/castellan-ui            | https://review.openstack.org/599649 | master        |
| switch documentation job to new PTI           | openstack/castellan-ui            | https://review.openstack.org/599650 | master        |
| import zuul job settings from project-config  | openstack/python-barbicanclient   | https://review.openstack.org/599652 | master        |
| switch documentation job to new PTI           | openstack/python-barbicanclient   | https://review.openstack.org/599653 | master        |
| add python 3.6 unit test job                  | openstack/python-barbicanclient   | https://review.openstack.org/599654 | master        |
| import zuul job settings from project-config  | openstack/python-barbicanclient   | https://review.openstack.org/599656 | stable/ocata  |
| import zuul job settings from project-config  | openstack/python-barbicanclient   | https://review.openstack.org/599658 | stable/pike   |
| import zuul job settings from project-config  | openstack/python-barbicanclient   | https://review.openstack.org/599660 | stable/queens |
| import zuul job settings from project-config  | openstack/python-barbicanclient   | https://review.openstack.org/599662 | stable/rocky  |
+-----------------------------------------------+-----------------------------------+-------------------------------------+---------------+

From fungi at yuggoth.org Tue Sep 4 15:57:01 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 4 Sep 2018 15:57:01 +0000 Subject: [openstack-dev] better name for placement In-Reply-To: <1536075775-sup-7652@lrrr.local> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> Message-ID: <20180904155701.7dypj5wj7tnxlze5@yuggoth.org> On 2018-09-04 11:44:41 -0400 (-0400), Doug Hellmann wrote: > Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100: [...] > > I would hope we'd be able to do that, and probably should do that. > > 'openstack-placement' seems a find pypi package name for a think > > from which you do 'import placement' to do some openstack stuff, > > yeah? > > That's still a pretty generic name for the top-level import, but I think > the only real risk is that the placement service couldn't be installed > at the same time as another package owned by someone else that used that > top-level name. I'm not sure how much of a risk that really is. [...] Well, it goes further than just the local system.
For example, if there was another project unrelated to OpenStack which also had a module named "placement" in the default import path, then Debian wouldn't be able to carry packages for both projects without modifying one of them. At least one would need the module renamed or would need to put it in a private path and then any consumers would need to be adjusted to suit. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jaypipes at gmail.com Tue Sep 4 16:08:41 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 4 Sep 2018 12:08:41 -0400 Subject: [openstack-dev] better name for placement In-Reply-To: <1536075775-sup-7652@lrrr.local> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> Message-ID: On 09/04/2018 11:44 AM, Doug Hellmann wrote: > Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100: >> On Tue, 4 Sep 2018, Jay Pipes wrote: >> >>> Is there a reason we couldn't have openstack-placement be the package name? >> >> I would hope we'd be able to do that, and probably should do that. >> 'openstack-placement' seems a find pypi package name for a think >> from which you do 'import placement' to do some openstack stuff, >> yeah? > > That's still a pretty generic name for the top-level import, but I think > the only real risk is that the placement service couldn't be installed > at the same time as another package owned by someone else that used that > top-level name. I'm not sure how much of a risk that really is. You mean if there was another Python package that used the package name "placement"? The alternative would be to make the top-level package something like os_placement instead?
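As Doug notes earlier in the thread, the distribution name and the importable package name are declared separately in setup.cfg, so the two need not match. A hypothetical pbr-style sketch (these are not the actual contents of the placement repo's file):

```ini
# setup.cfg -- hypothetical sketch, names for illustration only
[metadata]
# the distribution name used on PyPI ("pip install openstack-placement")
name = openstack-placement

[files]
# the top-level package you actually import ("import placement")
packages =
    placement
```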
Best, -jay From rico.lin.guanyu at gmail.com Tue Sep 4 16:14:42 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 5 Sep 2018 00:14:42 +0800 Subject: [openstack-dev] [heat] Heat PTG Message-ID: Dear all, As the PTG is near, it's time to settle down the PTG format for Heat. Here is the *PTG etherpad*: https://etherpad.openstack.org/p/2018-Denver-PTG-Heat This time we will run with *physical + online for all sessions*. The online link for each session will be posted on the etherpad before the session begins. *We will only use Wednesday and Thursday, and our discussion will try to be Asia friendly*, which means any session that requires the entire team's effort needs to happen in the morning. Also, *feel free to add topic suggestions* if you would like to raise any discussion. Otherwise, I will see you at the PTG (physical/online). *Any User/Ops feedback is welcome* as well, so feel free to leave any message for us. -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Sep 4 16:17:31 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 04 Sep 2018 12:17:31 -0400 Subject: [openstack-dev] better name for placement In-Reply-To: References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> Message-ID: <1536077826-sup-9892@lrrr.local> Excerpts from Jay Pipes's message of 2018-09-04 12:08:41 -0400: > On 09/04/2018 11:44 AM, Doug Hellmann wrote: > > Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100: > >> On Tue, 4 Sep 2018, Jay Pipes wrote: > >> > >>> Is there a reason we couldn't have openstack-placement be the package name? > >> > >> I would hope we'd be able to do that, and probably should do that. > >> 'openstack-placement' seems a find pypi package name for a think > >> from which you do 'import placement' to do some openstack stuff, > >> yeah?
> > > > That's still a pretty generic name for the top-level import, but I think > > the only real risk is that the placement service couldn't be installed > > at the same time as another package owned by someone else that used that > > top-level name. I'm not sure how much of a risk that really is. > > You mean if there was another Python package that used the package name > "placement"? > > The alternative would be to make the top-level package something like > os_placement instead? Yes. Doug From jaypipes at gmail.com Tue Sep 4 16:25:57 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 4 Sep 2018 12:25:57 -0400 Subject: [openstack-dev] better name for placement In-Reply-To: <1536077826-sup-9892@lrrr.local> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> Message-ID: <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> On 09/04/2018 12:17 PM, Doug Hellmann wrote: > Excerpts from Jay Pipes's message of 2018-09-04 12:08:41 -0400: >> On 09/04/2018 11:44 AM, Doug Hellmann wrote: >>> Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100: >>>> On Tue, 4 Sep 2018, Jay Pipes wrote: >>>> >>>>> Is there a reason we couldn't have openstack-placement be the package name? >>>> >>>> I would hope we'd be able to do that, and probably should do that. >>>> 'openstack-placement' seems a find pypi package name for a think >>>> from which you do 'import placement' to do some openstack stuff, >>>> yeah? >>> >>> That's still a pretty generic name for the top-level import, but I think >>> the only real risk is that the placement service couldn't be installed >>> at the same time as another package owned by someone else that used that >>> top-level name. I'm not sure how much of a risk that really is. >> >> You mean if there was another Python package that used the package name >> "placement"? 
>> >> The alternative would be to make the top-level package something like >> os_placement instead? Either one works for me. Though I'm pretty sure that it isn't necessary. The reason it isn't necessary is because the stuff in the top-level placement package isn't meant to be imported by anything at all. It's the placement server code. Nothing is going to be adding openstack-placement into its requirements.txt file or doing: from placement import blah If some part of the server repo is meant to be imported into some other system, say nova, then it will be pulled into a separate lib, ala ironiclib or neutronlib. Best, -jay From cdent+os at anticdent.org Tue Sep 4 16:33:55 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Sep 2018 17:33:55 +0100 (BST) Subject: [openstack-dev] better name for placement In-Reply-To: <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> Message-ID: On Tue, 4 Sep 2018, Jay Pipes wrote: > Either one works for me. Though I'm pretty sure that it isn't necessary. The > reason it isn't necessary is because the stuff in the top-level placement > package isn't meant to be imported by anything at all. It's the placement > server code. Yes. > If some part of the server repo is meant to be imported into some other > system, say nova, then it will be pulled into a separate lib, ala ironiclib > or neutronlib. Also yes. At this stage I _really_ don't want to go through the trouble of doing a second rename: we're in the process of finishing a rename now. 
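The co-installability risk discussed upthread can be sketched directly: if two distributions both ship a top-level "placement" package, whichever is found first on sys.path shadows the other entirely. A minimal, self-contained demonstration (the directory names are invented for illustration):

```python
import pathlib
import sys
import tempfile

# Simulate two distributions that both install a top-level "placement" package.
root = pathlib.Path(tempfile.mkdtemp())
for dist, owner in [("dist_a", "openstack"), ("dist_b", "other")]:
    pkg = root / dist / "placement"
    pkg.mkdir(parents=True)
    (pkg / "__init__.py").write_text(f"OWNER = '{owner}'\n")

# Both end up on sys.path, as if both distributions were installed.
sys.path.insert(0, str(root / "dist_b"))
sys.path.insert(0, str(root / "dist_a"))

import placement

# Only the first match on sys.path is importable; the other is unreachable.
print(placement.OWNER)  # → openstack
```

This is why distributors like Debian cannot carry two packages that claim the same top-level module name, regardless of what the PyPI distribution names are.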
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From no-reply at openstack.org Tue Sep 4 16:43:44 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Tue, 04 Sep 2018 16:43:44 -0000 Subject: [openstack-dev] kolla-ansible 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for kolla-ansible for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/kolla-ansible/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/kolla-ansible/log/?h=stable/rocky Release notes for kolla-ansible can be found at: https://docs.openstack.org/releasenotes/kolla-ansible/ From balazs.gibizer at ericsson.com Tue Sep 4 16:59:36 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 04 Sep 2018 18:59:36 +0200 Subject: [openstack-dev] better name for placement In-Reply-To: <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> Message-ID: <1536080376.27194.4@smtp.office365.com> On Tue, Sep 4, 2018 at 6:25 PM, Jay Pipes wrote: > On 09/04/2018 12:17 PM, Doug Hellmann wrote: >> Excerpts from Jay Pipes's message of 2018-09-04 12:08:41 -0400: >>> On 09/04/2018 11:44 AM, Doug Hellmann wrote: >>>> Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100: >>>>> On Tue, 4 Sep 2018, Jay Pipes wrote: >>>>> >>>>>> Is there a reason we couldn't have openstack-placement be the >>>>>> package name? 
>>>>> >>>>> I would hope we'd be able to do that, and probably should do that. >>>>> 'openstack-placement' seems a find pypi package name for a think >>>>> from which you do 'import placement' to do some openstack stuff, >>>>> yeah? >>>> >>>> That's still a pretty generic name for the top-level import, but I >>>> think >>>> the only real risk is that the placement service couldn't be >>>> installed >>>> at the same time as another package owned by someone else that >>>> used that >>>> top-level name. I'm not sure how much of a risk that really is. >>> >>> You mean if there was another Python package that used the package >>> name >>> "placement"? >>> >>> The alternative would be to make the top-level package something >>> like >>> os_placement instead? > > Either one works for me. Though I'm pretty sure that it isn't > necessary. The reason it isn't necessary is because the stuff in the > top-level placement package isn't meant to be imported by anything at > all. It's the placement server code. What about placement direct and the effort to allow cinder to import placement instead of running it as a separate service? Cheers, gibi > > Nothing is going to be adding openstack-placement into its > requirements.txt file or doing: > > from placement import blah > > If some part of the server repo is meant to be imported into some > other system, say nova, then it will be pulled into a separate lib, > ala ironiclib or neutronlib. 
> > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Tue Sep 4 17:01:54 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 4 Sep 2018 13:01:54 -0400 Subject: [openstack-dev] better name for placement In-Reply-To: <1536080376.27194.4@smtp.office365.com> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> <1536080376.27194.4@smtp.office365.com> Message-ID: On 09/04/2018 12:59 PM, Balázs Gibizer wrote: > On Tue, Sep 4, 2018 at 6:25 PM, Jay Pipes wrote: >> On 09/04/2018 12:17 PM, Doug Hellmann wrote: >>> Excerpts from Jay Pipes's message of 2018-09-04 12:08:41 -0400: >>>> On 09/04/2018 11:44 AM, Doug Hellmann wrote: >>>>> Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100: >>>>>> On Tue, 4 Sep 2018, Jay Pipes wrote: >>>>>> >>>>>>> Is there a reason we couldn't have openstack-placement be the >>>>>>> package name? >>>>>> >>>>>> I would hope we'd be able to do that, and probably should do that. >>>>>> 'openstack-placement' seems a find pypi package name for a think >>>>>> from which you do 'import placement' to do some openstack stuff, >>>>>> yeah? >>>>> >>>>> That's still a pretty generic name for the top-level import, but I >>>>> think >>>>> the only real risk is that the placement service couldn't be installed >>>>> at the same time as another package owned by someone else that used >>>>> that >>>>> top-level name. I'm not sure how much of a risk that really is. >>>> >>>> You mean if there was another Python package that used the package name >>>> "placement"? 
>>>> >>>> The alternative would be to make the top-level package something like >>>> os_placement instead? >> >> Either one works for me. Though I'm pretty sure that it isn't >> necessary. The reason it isn't necessary is because the stuff in the >> top-level placement package isn't meant to be imported by anything at >> all. It's the placement server code. > > What about placement direct and the effort to allow cinder to import > placement instead of running it as a separate service? I don't know what placement direct is. Placement wasn't designed to be imported as a module. It was designed to be a (micro-)service with a REST API for interfacing. Best, -jay From balazs.gibizer at ericsson.com Tue Sep 4 17:17:38 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 04 Sep 2018 19:17:38 +0200 Subject: [openstack-dev] better name for placement In-Reply-To: References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> <1536080376.27194.4@smtp.office365.com> Message-ID: <1536081458.27194.5@smtp.office365.com> On Tue, Sep 4, 2018 at 7:01 PM, Jay Pipes wrote: > On 09/04/2018 12:59 PM, Balázs Gibizer wrote: >> On Tue, Sep 4, 2018 at 6:25 PM, Jay Pipes wrote: >>> On 09/04/2018 12:17 PM, Doug Hellmann wrote: >>>> Excerpts from Jay Pipes's message of 2018-09-04 12:08:41 -0400: >>>>> On 09/04/2018 11:44 AM, Doug Hellmann wrote: >>>>>> Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100: >>>>>>> On Tue, 4 Sep 2018, Jay Pipes wrote: >>>>>>> >>>>>>>> Is there a reason we couldn't have openstack-placement be the >>>>>>>> package name? >>>>>>> >>>>>>> I would hope we'd be able to do that, and probably should do >>>>>>> that. 
>>>>>>> 'openstack-placement' seems a find pypi package name for a think >>>>>>> from which you do 'import placement' to do some openstack stuff, >>>>>>> yeah? >>>>>> >>>>>> That's still a pretty generic name for the top-level import, but >>>>>> I think >>>>>> the only real risk is that the placement service couldn't be >>>>>> installed >>>>>> at the same time as another package owned by someone else that >>>>>> used that >>>>>> top-level name. I'm not sure how much of a risk that really is. >>>>> >>>>> You mean if there was another Python package that used the >>>>> package name >>>>> "placement"? >>>>> >>>>> The alternative would be to make the top-level package something >>>>> like >>>>> os_placement instead? >>> >>> Either one works for me. Though I'm pretty sure that it isn't >>> necessary. The reason it isn't necessary is because the stuff in >>> the top-level placement package isn't meant to be imported by >>> anything at all. It's the placement server code. >> >> What about placement direct and the effort to allow cinder to import >> placement instead of running it as a separate service? > > I don't know what placement direct is. Placement wasn't designed to > be imported as a module. It was designed to be a (micro-)service with > a REST API for interfacing. In Vancouver we talked about allowing cinder to import placement as a library. 
See https://etherpad.openstack.org/p/YVR-cinder-placement L47 Cheers, gibi > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aj at suse.com Tue Sep 4 17:22:57 2018 From: aj at suse.com (Andreas Jaeger) Date: Tue, 4 Sep 2018 19:22:57 +0200 Subject: [openstack-dev] Retiring openstack-infra/odsreg and openstack-infra/puppet-odsreg Message-ID: <770fe3b5-911c-ad05-d21f-25bcfa10ccf6@suse.com> Puppet-odsreg is unused nowadays and it seems that odsreg is unused as well. I'll propose changes to retire them with topic "retire-odsreg", Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From jaypipes at gmail.com Tue Sep 4 17:31:18 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 4 Sep 2018 13:31:18 -0400 Subject: [openstack-dev] better name for placement In-Reply-To: <1536081458.27194.5@smtp.office365.com> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> <1536080376.27194.4@smtp.office365.com> <1536081458.27194.5@smtp.office365.com> Message-ID: On 09/04/2018 01:17 PM, Balázs Gibizer wrote: > On Tue, Sep 4, 2018 at 7:01 PM, Jay Pipes wrote: >> On 09/04/2018 12:59 PM, Balázs Gibizer wrote: >>> On Tue, Sep 4, 2018 at 6:25 PM, Jay Pipes wrote: >>>> On 09/04/2018 12:17 PM, Doug Hellmann wrote: >>>>> Excerpts from Jay Pipes's message of 2018-09-04 12:08:41 -0400: >>>>>> On 09/04/2018 11:44 AM, Doug Hellmann wrote: >>>>>>> 
Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100: >>>>>>>> On Tue, 4 Sep 2018, Jay Pipes wrote: >>>>>>>> >>>>>>>>> Is there a reason we couldn't have >>>>>>>>> openstack-placement be the package name? >>>>>>>> >>>>>>>> I would hope we'd be able to do that, and probably should do that. >>>>>>>> 'openstack-placement' seems a find pypi package name for a think >>>>>>>> from which you do 'import placement' to do some openstack stuff, >>>>>>>> yeah? >>>>>>> >>>>>>> That's still a pretty generic name for the top-level >>>>>>> import, but I think >>>>>>> the only real risk is that the placement service couldn't be >>>>>>> installed >>>>>>> at the same time as another package owned by someone >>>>>>> else that used that >>>>>>> top-level name. I'm not sure how much of a risk that really is. >>>>>> >>>>>> You mean if there was another Python package that used the package >>>>>> name >>>>>> "placement"? >>>>>> >>>>>> The alternative would be to make the top-level package something like >>>>>> os_placement instead? >>>> >>>> Either one works for me. Though I'm pretty sure that it isn't >>>> necessary. The reason it isn't necessary is because the stuff >>>> in the top-level placement package isn't meant to be imported by >>>> anything at all. It's the placement server code. >>> >>> What about placement direct and the effort to allow cinder to >>> import placement instead of running it as a separate service? >> >> I don't know what placement direct is. Placement wasn't designed to be >> imported as a module. It was designed to be a (micro-)service with a >> REST API for interfacing. > > In Vancouver we talked about allowing cinder to import placement as a > library. See https://etherpad.openstack.org/p/YVR-cinder-placement L47 I wasn't in YVR, which explains why I'd never heard of it. There are a number of misconceptions in the above document about the placement service that don't seem to have been addressed.
I'm wondering if it's worth revisiting the topic in Denver with the Cinder team or whether the Cinder team isn't interested in working with the placement service? -jay From miguel at mlavalle.com Tue Sep 4 17:36:27 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 4 Sep 2018 12:36:27 -0500 Subject: [openstack-dev] [neutron] Weekly meeting canceled on Tuesday 11th Message-ID: Dear Neutron Team, Due to the PTG in Denver, the Neutron weekly team meeting on Tuesday 11th at 1400 UTC is canceled. We will resume our meetings on Monday September 17th at 2100 UTC. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Tue Sep 4 17:40:11 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 4 Sep 2018 12:40:11 -0500 Subject: [openstack-dev] [neutron] PTG agenda Message-ID: Dear Neutron Team, I have scheduled all the topics that were proposed for the PTG in Denver here: https://etherpad.openstack.org/p/neutron-stein-ptg. Please go to line 128 and onwards to see the detailed schedule. Please reach out to me should we need to make any changes or adjustments. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Sep 4 17:44:08 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Sep 2018 18:44:08 +0100 (BST) Subject: [openstack-dev] better name for placement In-Reply-To: References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> <1536080376.27194.4@smtp.office365.com> <1536081458.27194.5@smtp.office365.com> Message-ID: On Tue, 4 Sep 2018, Jay Pipes wrote: > I wasn't in YVR, which explains why I'd never heard of it. There are a number > of misconceptions in the above document about the placement service that > don't seem to have been addressed.
I'm wondering if its worth revisiting the > topic in Denver with the Cinder team or whether the Cinder team isn't > interested in working with the placement service? It was also discussed as part of the reshaper spec and implemented for future use by a potential fast forward upgrade tool: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html#direct-placement https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/placement/direct.py I agree, talking to Cinder some more in denver about use of placement, either over HTTP or direct, whatever form, is good. But I don't think any of that should impact the naming situation. It's placement now, and placement is not really any less unique than a lot of the other words we use, the direct situation is a very special and edge case (likely in containers anyway, so naming not as much of a big deal). Changing the name, again, is painful. Please, let's not do it. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From ed at leafe.com Tue Sep 4 17:54:32 2018 From: ed at leafe.com (Ed Leafe) Date: Tue, 4 Sep 2018 12:54:32 -0500 Subject: [openstack-dev] better name for placement In-Reply-To: References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> <1536080376.27194.4@smtp.office365.com> <1536081458.27194.5@smtp.office365.com> Message-ID: <1CE9BB6D-E835-4B1A-BEAB-87FD67791A3B@leafe.com> On Sep 4, 2018, at 12:44 PM, Chris Dent wrote: > > Changing the name, again, is painful. Please, let's not do it. I was in favor of coming up with a project name for placement. It was discussed, and the decision was made not to do so. We have since moved forward with the outcome of that decision. Revisiting it now would be painful, as Chris notes, and do nothing to advance the progress we have been making. 
Let’s focus on the work in front of us, and not waste time revisiting past decisions. -- Ed Leafe From doug at doughellmann.com Tue Sep 4 17:59:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 04 Sep 2018 13:59:23 -0400 Subject: [openstack-dev] [goals][python3] week 4 update Message-ID: <1536083697-sup-1420@lrrr.local> Subject: [goal][python3] week 4 update This is the 4th week of the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == What we learned last week == We have discovered a few repositories with tests defined in project-config so that they run on all branches, even though they are not supported on all branches. We will need project teams to help identify those cases, and edit the scripted patches to remove any jobs that are not appropriate on stable branches. If they pass, you might want to consider keeping them, but if they are failing and never worked then it's OK to remove them. A few reviewers have suggested using some YAML features that allow repeated sections to be inserted by reference, instead of copying and pasting content in different parts of the Zuul configuration. We can do this, but need to do it after the migration. See http://lists.openstack.org/pipermail/openstack-dev/2018-August/134065.html for details. A surprising (to me) number of stable branches don't pass their tests reliably. It is possible to configure Storyboard to not show all of the events (comments, changes, etc.) associated with a story by default. Doing this fixes the slow rendering problem for the tracking story, and the data is still available for normal stories by clicking on the detail tabs at the bottom of the page. Go to https://storyboard.openstack.org/#!/profile/preferences and select the "Timeline events" you want to see by default. 
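The YAML feature referred to above is anchors and aliases, which let a repeated block be defined once and reused by reference. A generic sketch of the idea (the job and pipeline names are invented, and this is not a drop-in Zuul configuration):

```yaml
# Define the shared job list once, with an anchor...
_common_jobs: &common_jobs
  - openstack-tox-pep8
  - openstack-tox-py36

# ...then reuse it by alias instead of copying and pasting.
check:
  jobs: *common_jobs
gate:
  jobs: *common_jobs
```

Per the linked thread, this kind of cleanup should wait until after the migration is complete.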
== Ongoing and Completed Work ==

Here is the current status of the Zuul migration portion of the goal (not including patches to add new python 3.6 unit test jobs, change the documentation jobs, etc.).

+---------------------+------+-------+-------------------------------------------------------------+--------------------+
| Team                | Open | Total | Status                                                      | Champion           |
+---------------------+------+-------+-------------------------------------------------------------+--------------------+
| adjutant            | 4    | 4     | migration in progress                                       | Doug Hellmann      |
| barbican            | 13   | 13    | migration in progress                                       | Doug Hellmann      |
| blazar              | 16   | 16    | migration in progress                                       | Nguyen Hai         |
| Chef OpenStack      | 0    | 1     | DONE                                                        | Doug Hellmann      |
| cinder              | 0    | 0     | not started, 6 repos                                        |                    |
| cloudkitty          | 0    | 17    | waiting for cleanup https://review.openstack.org/#/c/598929 | Doug Hellmann      |
| congress            | 0    | 16    | DONE                                                        | Nguyen Hai         |
| cyborg              | 0    | 9     | DONE                                                        | Nguyen Hai         |
| designate           | 0    | 17    | DONE                                                        | Nguyen Hai         |
| Documentation       | 0    | 12    | DONE                                                        | Doug Hellmann      |
| dragonflow          | 4    | 4     | migration in progress                                       | Nguyen Hai         |
| ec2-api             | 2    | 7     | migration in progress                                       |                    |
| freezer             | 17   | 28    | migration in progress                                       |                    |
| glance              | 16   | 16    | migration in progress                                       | Nguyen Hai         |
| heat                | 8    | 27    | migration in progress                                       | Doug Hellmann      |
| horizon             | 0    | 8     | DONE                                                        | Nguyen Hai         |
| I18n                | 0    | 0     | not started, 2 repos                                        | Doug Hellmann      |
| Infrastructure      | 0    | 0     | not started, 158 repos                                      | Doug Hellmann      |
| InteropWG           | 0    | 4     | DONE                                                        | Doug Hellmann      |
| ironic              | 15   | 60    | migration in progress                                       | Doug Hellmann      |
| karbor              | 22   | 22    | migration in progress                                       | Nguyen Hai         |
| keystone            | 23   | 30    | migration in progress                                       | Doug Hellmann      |
| kolla               | 1    | 8     | migration in progress                                       |                    |
| kuryr               | 14   | 24    | migration in progress                                       | Doug Hellmann      |
| loci                | 0    | 0     | not started, 1 repos                                        |                    |
| magnum              | 7    | 17    | migration in progress                                       |                    |
| manila              | 11   | 19    | migration in progress                                       | Goutham Pacha Ravi |
| masakari            | 16   | 18    | migration in progress                                       | Nguyen Hai         |
| mistral             | 0    | 25    | DONE                                                        | Nguyen Hai         |
| monasca             | 7    | 66    | migration in progress                                       | Doug Hellmann      |
| murano              | 1    | 25    | migration in progress                                       |                    |
| neutron             | 57   | 74    | migration in progress                                       | Doug Hellmann      |
| nova                | 0    | 0     | not started, 6 repos                                        |                    |
| octavia             | 0    | 23    | DONE                                                        | Nguyen Hai         |
| OpenStack Charms    | 0    | 0     | not started, 80 repos                                       |                    |
| OpenStack-Helm      | 5    | 5     | migration in progress                                       |                    |
| OpenStackAnsible    | 36   | 270   | migration in progress                                       |                    |
| OpenStackClient     | 1    | 16    | migration in progress                                       |                    |
| OpenStackSDK        | 12   | 15    | migration in progress                                       |                    |
| oslo                | 0    | 157   | DONE                                                        | Doug Hellmann      |
| Packaging-rpm       | 3    | 3     | migration in progress                                       | Doug Hellmann      |
| PowerVMStackers     | 0    | 15    | DONE                                                        | Doug Hellmann      |
| Puppet OpenStack    | 39   | 193   | migration in progress                                       | Doug Hellmann      |
| qinling             | 1    | 6     | migration in progress                                       |                    |
| Quality Assurance   | 0    | 0     | not started, 22 repos                                       | Doug Hellmann      |
| rally               | 0    | 2     | DONE                                                        | Nguyen Hai         |
| Release Management  | 1    | 1     | migration in progress                                       | Doug Hellmann      |
| requirements        | 0    | 5     | DONE                                                        | Doug Hellmann      |
| sahara              | 0    | 27    | DONE                                                        | Doug Hellmann      |
| searchlight         | 0    | 13    | DONE                                                        | Nguyen Hai         |
| senlin              | 11   | 16    | migration in progress                                       | Nguyen Hai         |
| SIGs                | 3    | 5     | migration in progress                                       | Doug Hellmann      |
| solum               | 0    | 17    | DONE                                                        | Nguyen Hai         |
| storlets            | 4    | 5     | migration in progress                                       |                    |
| swift               | 0    | 11    | DONE                                                        | Nguyen Hai         |
| tacker              | 8    | 16    | migration in progress                                       | Nguyen Hai         |
| Technical Committee | 0    | 5     | DONE                                                        | Doug Hellmann      |
| Telemetry           | 19   | 31    | migration in progress                                       | Doug Hellmann      |
| tricircle           | 3    | 9     | migration in progress                                       | Nguyen Hai         |
| tripleo             | 69   | 135   | migration in progress                                       | Doug Hellmann      |
| trove               | 0    | 0     | not started, 5 repos                                        |                    |
| User Committee      | 4    | 5     | migration in progress                                       | Doug Hellmann      |
| vitrage             | 0    | 17    | DONE                                                        | Nguyen Hai         |
| watcher             | 5    | 17    | migration in progress                                       | Nguyen Hai         |
| winstackers         | 6    | 11    | migration in progress                                       |                    |
| zaqar               | 12   | 17    | migration in progress                                       |                    |
| zun                 | 0    | 13    | DONE                                                        | Nguyen Hai         |
| TOTAL               | 496  | 1670  | 20/67 DONE                                                  |                    |
+---------------------+------+-------+-------------------------------------------------------------+--------------------+

== Next Steps ==

If your team shows up in the above list as "not started" please let us know when you are ready to begin the transition. As you can see, it involves quite a few patches for some teams and it will be better to propose those early in the cycle during a relatively quiet period, rather than waiting until a time when having large batches of changes proposed together disrupts work.

I have proposed the change to project-config to change all of the packaging jobs to use the new publish-to-pypi-python3 template. We should be able to have that change in place before the first milestone for Stein so that we have an opportunity to test it. http://lists.openstack.org/pipermail/openstack-dev/2018-August/134068.html

== How can you help? ==

1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+)
2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects.
3. Work on adding functional test jobs that run under Python 3.

== How can you ask for help? ==

If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list.
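[Editorial aside: for teams wondering what the publish-to-pypi-python3 switch mentioned above involves, a project-config stanza might look roughly like this. Only the template name comes from the update itself; the project name and surrounding layout are hypothetical:]

```yaml
# Hypothetical project-config fragment, for illustration only.
- project:
    name: openstack/example-project       # placeholder project
    templates:
      # The python3 variant of the release template builds and
      # uploads the sdist/wheel with a Python 3 interpreter.
      - publish-to-pypi-python3           # was: publish-to-pypi
```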
== Reference Material ==

Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html
Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open
Storyboard: https://storyboard.openstack.org/#!/board/104
Zuul migration notes: https://etherpad.openstack.org/p/python3-first
Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586
Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3

From jillr at redhat.com Tue Sep 4 18:09:30 2018
From: jillr at redhat.com (Jill Rouleau)
Date: Tue, 04 Sep 2018 11:09:30 -0700
Subject: [openstack-dev] [tripleo-ansible] Future plans
In-Reply-To:
References:
Message-ID: <1536084570.5721.7.camel@redhat.com>

Hi Martin,

On Mon, 2018-09-03 at 21:16 +0200, Martin Magr wrote:
> Greetings,
>
>   since I did not find any blueprint regarding proper usage of
> tripleo-ansible, I would like to ask how exactly we plan to use the
> tripleo-ansible project, what should be the proper structure of
> roles/playbooks, etc. Given the discussion in [1] it is the best place
> for backup and restore playbooks and I'd like to start preparing
> patches for B&R. Currently the development is being done in [2], but I
> hope that is only a temporary location.

I've added the backup task example to the ansible roles blueprint[0]. Generally you'll want to add your tasks, variables, role-specific handlers, etc. to the appropriate ansible role repo (in this case ansible-role-openstack-operations) and the playbooks for using those role tasks to the tripleo-ansible project repo. TripleO-specific shared handlers/libraries/etc - things your playbooks will use that are not specific to one single role - would also go in the tripleo-ansible repo.
[0] https://blueprints.launchpad.net/tripleo/+spec/ansible-tasks-to-role > > Thanks in advance for answers, > Martin > > [1] https://review.openstack.org/#/c/582453/ > [2] https://github.com/centos-opstools/os-backup-ansible > > --  > Martin Mágr > Senior Software Engineer > Red Hat Czech > ______________________________________________________________________ > ____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubsc > ribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From aschultz at redhat.com Tue Sep 4 19:03:01 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 4 Sep 2018 13:03:01 -0600 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: References: Message-ID: On Thu, Aug 9, 2018 at 2:43 PM, Mohammed Naser wrote: > Hi Alex, > > I am very much in favour of what you're bringing up. We do have > multiple projects that leverage Ansible in different ways and we all > end up doing the same thing at the end. The duplication of work is > not really beneficial for us as it takes away from our use-cases. > > I believe that there is a certain number of steps that we all share > regardless of how we deploy, some of the things that come up to me > right away are: > > - Configuring infrastructure services (i.e.: create vhosts for service > in rabbitmq, create databases for services, configure users for > rabbitmq, db, etc) > - Configuring inter-OpenStack services (i.e. keystone_authtoken > section, creating endpoints, etc and users for services) > - Configuring actual OpenStack services (i.e. 
> /etc//.conf file with the ability of extending
> options)
> - Running CI/integration on a cloud (i.e. common role that literally
> gets an admin user, password and auth endpoint and creates all
> resources and does CI)
>
> This would deduplicate a lot of work, and especially the last one
> might be beneficial for more than Ansible-based projects; I can
> imagine Puppet OpenStack leveraging this as well inside Zuul CI
> (optionally)... However, I think that this is something which we should
> discuss further at the PTG. I think that there will be a tiny bit of
> upfront work as we all standardize but then it's a win for all involved
> communities.
>
> I would like to propose that deployment tools maybe sit down together
> at the PTG, all share how we use Ansible to accomplish these tasks and
> then perhaps we can work all together on abstracting some of these
> concepts together for us to all leverage.
>

I'm currently trying to get a spot on Tuesday morning to further discuss some of these items. In the meantime I've started an etherpad[0] to start collecting ideas for things to discuss. At the moment I've got the tempest role collaboration and some basic ideas for best practice items that we can discuss. Feel free to add your own and I'll update the etherpad with a time slot when I get one nailed down.

Thanks,
-Alex

[0] https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg

> I'll let others chime in as well.
>
> Regards,
> Mohammed
>
> On Thu, Aug 9, 2018 at 4:31 PM, Alex Schultz wrote:
>> Ahoy folks,
>>
>> I think it's time we come up with some basic rules/patterns on where
>> code lands when it comes to OpenStack related Ansible roles and as we
>> convert/export things. There was a recent proposal to create an
>> ansible-role-tempest[0] that would take what we use in
>> tripleo-quickstart-extras[1] and separate it for re-usability by
>> others.
So it was asked if we could work with the openstack-ansible >> team and leverage the existing openstack-ansible-os_tempest[2]. It >> turns out we have a few more already existing roles laying around as >> well[3][4]. >> >> What I would like to propose is that we as a community come together >> to agree on specific patterns so that we can leverage the same roles >> for some of the core configuration/deployment functionality while >> still allowing for specific project specific customization. What I've >> noticed between all the project is that we have a few specific core >> pieces of functionality that needs to be handled (or skipped as it may >> be) for each service being deployed. >> >> 1) software installation >> 2) configuration management >> 3) service management >> 4) misc service actions >> >> Depending on which flavor of the deployment you're using, the content >> of each of these may be different. Just about the only thing that is >> shared between them all would be the configuration management part. >> To that, I was wondering if there would be a benefit to establishing a >> pattern within say openstack-ansible where we can disable items #1 and >> #3 but reuse #2 in projects like kolla/tripleo where we need to do >> some configuration generation. If we can't establish a similar >> pattern it'll make it harder to reuse and contribute between the >> various projects. >> >> In tripleo we've recently created a bunch of ansible-role-tripleo-* >> repositories which we were planning on moving the tripleo specific >> tasks (for upgrades, etc) to and were hoping that we might be able to >> reuse the upstream ansible roles similar to how we've previously >> leverage the puppet openstack work for configurations. 
So for us, it >> would be beneficial if we could maybe help align/contribute/guide the >> configuration management and maybe misc service action portions of the >> openstack-ansible roles, but be able to disable the actual software >> install/service management as that would be managed via our >> ansible-role-tripleo-* roles. >> >> Is this something that would be beneficial to further discuss at the >> PTG? Anyone have any additional suggestions/thoughts? >> >> My personal thoughts for tripleo would be that we'd have >> tripleo-ansible calls openstack-ansible- for core config but >> package/service installation disabled and calls >> ansible-role-tripleo- for tripleo specific actions such as >> opinionated packages/service configuration/upgrades. Maybe this is >> too complex? But at the same time, do we need to come up with 3 >> different ways to do this? >> >> Thanks, >> -Alex >> >> [0] https://review.openstack.org/#/c/589133/ >> [1] http://git.openstack.org/cgit/openstack/tripleo-quickstart-extras/tree/roles/validate-tempest >> [2] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest/ >> [3] http://git.openstack.org/cgit/openstack/kolla-ansible/tree/ansible/roles/tempest >> [4] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-tempest >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. 
http://vexxhost.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From juliaashleykreger at gmail.com Tue Sep 4 19:16:41 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 4 Sep 2018 12:16:41 -0700 Subject: [openstack-dev] [election][tc] announcing candidacy Message-ID: Greetings Stackers! I hereby announce my candidacy for a position on the OpenStack Technical Committee. In many respects I consider myself a maverick, except reality is sometimes entirely different than my own self perception, upon reflection. I find self reflection and introspection to be powerful tools, along with passion and desire for the common good. That desire for the common good is the driving force behind my involvement in OpenStack, which I hope to see as a vibrant and thriving community for years to come. Have things changed? Yes, I think they are ever evolving. I think we can only take the logical paths that we see before us at the time. Does this mean we will make mistakes? Absolutely, but mistakes are also opportunities to learn and evolve as time goes on; which perhaps is an unspoken backbone of our community. The key is that we must not fear change but embrace it. Changing our community for the better is a process we can only take one step at a time, and we must recognize our strength is in our diversity. As we move forward, as we evolve, we need to keep in mind our goals and overall vision. In a sense, these things vary across all projects, but our central community vision and goal helps provide direction. As we continue our journey, I believe we need to lift up new contributors, incorporate new thoughts, and new ideas. 
Embracing change while keeping our basic course so new contributors can better find and integrate with our community as we continue forward. We need to listen and take that as feedback to better understand other perspectives, for it is not only our singular personal perspective which helps give us direction, but the community as a whole. For those who do not know me well my name is Julia Ashley Kreger. Often I can be found on IRC as TheJulia, in numerous OpenStack related channels. I have had the pleasure of serving the community this past year on the Technical Committee. I have also served the ironic community quite a bit during my time in the OpenStack community, which began during the Juno cycle. I am the current Project Team Lead for the Ironic team. I began serving in that capacity starting with the Rocky cycle. Prior, I served as the team's release liaison. You might have seen me as one of those crazy people advocating for standalone usage. Prior lives included deploying and operating complex systems, but that is enough about me. Ultimately I believe I bring a different perspective to the TC and it is for this, and my many strong beliefs and experiences, I feel I am well suited to serve the community for another year on the Technical Committee. 
Thank you for your consideration, Julia freenode: TheJulia Twitter: @ashinclouds https://www.openstack.org/community/members/profile/19088/julia-kreger http://stackalytics.com/?release=all&user_id=juliaashleykreger From sean.mcginnis at gmx.com Tue Sep 4 20:16:24 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 4 Sep 2018 15:16:24 -0500 Subject: [openstack-dev] [goals][python3] week 4 update In-Reply-To: <1536083697-sup-1420@lrrr.local> References: <1536083697-sup-1420@lrrr.local> Message-ID: <20180904201624.GA4900@sm-workstation> > > +---------------------+------+-------+-------------------------------------------------------------+--------------------+ > | Team | Open | Total | Status | Champion | > +---------------------+------+-------+-------------------------------------------------------------+--------------------+ > > | cinder | 0 | 0 | not started, 6 repos | | > > +---------------------+------+-------+-------------------------------------------------------------+--------------------+ > > == Next Steps == > > If your team shows up in the above list as "not started" please let > us know when you are ready to begin the transition. As you can see, > it involves quite a few patches for some teams and it will be better > to propose those early in the cycle during a relatively quiet period, > rather than waiting until a time when having large batches of changes > proposed together disrupts work. > Jay has been out lately, so I will speak for Cinder for now. We should be at a good point to proceed with the transition. Sean From cdent+os at anticdent.org Tue Sep 4 20:35:26 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Sep 2018 21:35:26 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-36 Message-ID: HTML: https://anticdent.org/tc-report-18-36.html It's been a rather busy day, so this TC Report will be a quick update of some discussions that have happened in the past week. 
# PEP 8002 With Guido van Rossum stepping back from his role as the BDFL of Python, there's work in progress to review different methods of governance used in other communities to come up with some ideas for the future of Python. Those reviews are being gathered in PEP 8002. Doug Hellman has been helping with those conversations and asked for [input on a draft](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-28.log.html#t2018-08-28T20:40:41). There was some good conversation, especially the bits about the differences between ["direct democracy" and whatever what we do here in OpenStack](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-29.log.html#t2018-08-29T11:00:50). The result of the draft was quickly merged into [PEP 8002](https://www.python.org/dev/peps/pep-8002/). # Summit Sessions There was discussion about concerns some people experience with some [summit sessions feeling like advertising](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-29.log.html#t2018-08-29T18:21:08). # PTG Coming Soon The PTG is next week! TC sessions are described on [this etherpad](https://etherpad.openstack.org/p/tc-stein-ptg). # Elections Reminder TC [election season](https://governance.openstack.org/election/) is right now. Nomination period ends at the end of the day (UTC) 6th of September so there isn't much time left. If you're toying with the idea, nominate yourself, the community wants your input. If you have any questions please feel free to ask. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From me at not.mn Tue Sep 4 21:07:03 2018 From: me at not.mn (John Dickinson) Date: Tue, 04 Sep 2018 14:07:03 -0700 Subject: [openstack-dev] [election][tc] announcing candidacy In-Reply-To: References: Message-ID: On 4 Sep 2018, at 12:16, Julia Kreger wrote: > Greetings Stackers! > > I hereby announce my candidacy for a position on the OpenStack > Technical > Committee. 
> > In many respects I consider myself a maverick, except reality is > sometimes > entirely different than my own self perception, upon reflection. > I find self reflection and introspection to be powerful tools, along > with > passion and desire for the common good. That desire for the common > good > is the driving force behind my involvement in OpenStack, which I hope > to > see as a vibrant and thriving community for years to come. > > Have things changed? Yes, I think they are ever evolving. I think we > can only > take the logical paths that we see before us at the time. Does this > mean > we will make mistakes? Absolutely, but mistakes are also opportunities > to learn and evolve as time goes on; which perhaps is an unspoken > backbone > of our community. The key is that we must not fear change but embrace > it. > > Changing our community for the better is a process we can only take > one step at a time, and we must recognize our strength > is in our diversity. As we move forward, as we evolve, we need to keep > in > mind our goals and overall vision. In a sense, these things vary > across all > projects, but our central community vision and goal helps provide > direction. > > As we continue our journey, I believe we need to lift up new > contributors, > incorporate new thoughts, and new ideas. Embracing change while > keeping our > basic course so new contributors can better find and integrate with > our > community as we continue forward. We need to listen and take that as > feedback to better understand other perspectives, for it is not only > our singular personal perspective which helps give us direction, > but the community as a whole. > > For those who do not know me well my name is Julia Ashley Kreger. > Often > I can be found on IRC as TheJulia, in numerous OpenStack related > channels. > I have had the pleasure of serving the community this past year on the > Technical Committee. 
I have also served the ironic community quite a > bit > during my time in the OpenStack community, which began during the Juno > cycle. > > I am the current Project Team Lead for the Ironic team. I began > serving in that capacity starting with the Rocky cycle. Prior, > I served as the team's release liaison. You might have seen me as one > of those crazy people advocating for standalone usage. Prior lives > included deploying and operating complex systems, but that is enough > about me. > > Ultimately I believe I bring a different perspective to the TC and it > is for > this, and my many strong beliefs and experiences, I feel I am well > suited > > to serve the community for another year on the Technical Committee. > > Thank you for your consideration, > > Julia > > freenode: TheJulia > Twitter: @ashinclouds > https://www.openstack.org/community/members/profile/19088/julia-kreger > http://stackalytics.com/?release=all&user_id=juliaashleykreger > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Julia, Do you have any specific examples of new ideas you are wanting to propose or advocate for, should you be re-elected? --John From melwittt at gmail.com Tue Sep 4 21:13:33 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 4 Sep 2018 14:13:33 -0700 Subject: [openstack-dev] [nova] No weekly meeting on Thursday September 13 Message-ID: Hi everyone, This is just a reminder we won't have a weekly Nova meeting on Thursday September 13 because of PTG week. The next meeting will be on Thursday September 20 at 1400 UTC [1]. 
Cheers, -melanie [1] https://wiki.openstack.org/wiki/Meetings/Nova From openstack at fried.cc Tue Sep 4 21:16:31 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 4 Sep 2018 16:16:31 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> Message-ID: > 030 is okay as long as nothing goes wrong. If something does it > raises exceptions which would currently fail as the exceptions are > not there. See below for more about exceptions. Maybe I'm misunderstanding what these migration thingies are supposed to be doing, but 030 [1] seems like it's totally not applicable to placement and should be removed. The placement database doesn't (and shouldn't) have 'flavors', 'cell_mappings', or 'host_mappings' tables in the first place. What am I missing? > * Presumably we can trim the placement DB migrations to just stuff >   that is relevant to placement Yah, I would hope so. What possible reason could there be to do otherwise? -efried [1] https://github.com/openstack/placement/blob/2f1267c8785138c8f2c9495bd97e6c2a96c7c336/placement/db/sqlalchemy/api_migrations/migrate_repo/versions/030_require_cell_setup.py From melwittt at gmail.com Tue Sep 4 21:27:56 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 4 Sep 2018 14:27:56 -0700 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> Message-ID: <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> On Tue, 4 Sep 2018 16:16:31 -0500, Eric Fried wrote: >> 030 is okay as long as nothing goes wrong. If something does it >> raises exceptions which would currently fail as the exceptions are >> not there. See below for more about exceptions. > Maybe I'm misunderstanding what these migration thingies are supposed to > be doing, but 030 [1] seems like it's totally not applicable to > placement and should be removed. 
The placement database doesn't (and
> shouldn't) have 'flavors', 'cell_mappings', or 'host_mappings' tables in
> the first place.
>
> What am I missing?
>
>> * Presumably we can trim the placement DB migrations to just stuff
>>   that is relevant to placement
>
> Yah, I would hope so. What possible reason could there be to do otherwise?

Yes, we should definitely trim the placement DB migrations to only things relevant to placement. And we can use this opportunity to get rid of cruft too and squash all of the placement migrations together to start at migration 1 for the placement repo. If anyone can think of a problem with doing that, please shout it out.

-melanie

From mriedemos at gmail.com Tue Sep 4 21:29:45 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Tue, 4 Sep 2018 16:29:45 -0500
Subject: [openstack-dev] [goals][upgrade-checkers] Week R-31 update
Message-ID: <84f4ab63-5790-1ba8-7be2-eadc98f3b3ae@gmail.com>

Just a few updates this week.

1. The story is now populated with a task per project that may have something to complete for this goal [1]. PTLs, or their liaison(s), should assign the task for their project to whomever is going to work on the goal. The goal document in governance is being updated with the appropriate links to storyboard [2].

2. While populating the story and determining which projects to omit (like infra, docs, QA were obvious), I left in the deployment projects, but those likely can/should opt out of this goal for Stein since the goal is more focused on service projects like keystone/cinder/glance. I have pushed a docs update to the goal with respect to deployment projects [3]. For deployment projects that don't plan on doing anything with this goal, feel free to just invalidate the task in storyboard for your project.

3.
I have a developer/contributor reference docs patch up for review in nova [4] which is hopefully written generically enough that it can be consumed by and used as a guide for other projects implementing these upgrade checks. 4. I've proposed an amendment to the completion criteria for the goal [5] saying that projects with the "supports-upgrade" tag should integrate the checks from their project with their upgrade CI testing job. That could be grenade or some other upgrade testing framework, but it stands to reason that a project which claims to support upgrades and has automated checks for upgrades, should be running those in their CI. Let me know if there are any questions. There will also be some time during a PTG lunch-and-learn session where I'll go over this goal at a high level, so feel free to ask questions during or after that at the PTG as well. [1] https://storyboard.openstack.org/#!/story/2003657 [2] https://review.openstack.org/#/c/599759/ [3] https://review.openstack.org/#/c/599835/ [4] https://review.openstack.org/#/c/596902/ [5] https://review.openstack.org/#/c/599849/ -- Thanks, Matt From cdent+os at anticdent.org Tue Sep 4 21:36:42 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Sep 2018 22:36:42 +0100 (BST) Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> Message-ID: On Tue, 4 Sep 2018, Eric Fried wrote: >> 030 is okay as long as nothing goes wrong. If something does it >> raises exceptions which would currently fail as the exceptions are >> not there. See below for more about exceptions. > > Maybe I'm misunderstanding what these migration thingies are supposed to > be doing, but 030 [1] seems like it's totally not applicable to > placement and should be removed. The placement database doesn't (and > shouldn't) have 'flavors', 'cell_mappings', or 'host_mappings' tables in > the first place. > > What am I missing? 
Nothing, as far as I can tell, but as we hadn't had a clear plan about how to proceed with the trimming of migrations, I've been trying to point out where they form little speed bumps as we've gone through this process and carried them with us, and tried to annotate where they may present some more, until we trim them. There are numerous limits to my expertise, and the db migrations are one of several areas where I decided I wasn't going to hold the ball; I'd just get us to the game and hope other people would find and fill in the blanks. That seems to be working okay, so far.

>> * Presumably we can trim the placement DB migrations to just stuff
>>   that is relevant to placement
>
> Yah, I would hope so. What possible reason could there be to do otherwise?

Mel's plan looks good to me.

--
Chris Dent                  ٩◔̯◔۶          https://anticdent.org/
freenode: cdent                                tw: @anticdent

From lbragstad at gmail.com Tue Sep 4 21:40:19 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Tue, 4 Sep 2018 16:40:19 -0500
Subject: [openstack-dev] [election][tc] TC nomination
Message-ID:

Hi all,

I'd like to submit my candidacy to be a member of the OpenStack Technical Committee.

My involvement with OpenStack began during the Diablo release. Since then I've participated in various parts of the community, in both upstream and downstream roles. Today I mainly focus on authorization and identity management.

As your elected member of the Technical Committee, I plan to continue advocating for cross-project initiatives and easing cross-project collaboration wherever possible. One area where I'm heavily invested in this type of work is improving OpenStack's authorization system. For example, I've championed a community goal [0], which eases policy maintenance and upgrades for operators. I've also contributed to the improvement of oslo libraries, making it easier for other services to change policies and consume authorization attributes.
I believe isolating policy from service-specific logic is crucial in letting developers securely implement system-level and project-level APIs. Finally, I worked to revive a thread from 2015 [1] that allows us to deliver better support for default roles out-of-the-box [2]. This will reduce custom policies found in most deployments, enabling better interoperability between clouds and push OpenStack to be more self-service than it is today. There is still more work to do, but all of this makes API protection easier to implement while giving more functionality and security to end-users and operators. Based upon the few examples shared above, I think it's imperative to approach cross-project initiatives in a hands-on manner. As a member of the TC, I plan to spend my time helping projects close the gap on goals accepted by the TC by contributing to them directly. Additionally, I want to use that experience to collaborate with others and find ways to make achieving efforts across projects more common than it is today, as opposed to monolithic efforts that commonly result in burnout and exhaustion for a select few people. Tracking Rocky community goals specifically shows that 50% of projects are still implementing, reviewing, or have yet to start mutable configuration. 61% are in the same boat for removing usage of mox. Some efforts take years to successfully complete across projects (e.g. volume multi-attach, adopting new API versions). Whether the initiatives are a focused effort between two projects, or a community-wide goal, they provide significant value to everyone consuming, deploying, or developing the software we write. I'm running for TC because I want to do what I can to make cross-project interaction easier through contributing and building necessary process as a TC member. Thanks for reading through my candidacy. Safe travels to Denver and hopefully I'll see you at the PTG. 
Lance [0] https://governance.openstack.org/tc/goals/queens/policy-in-code.html [1] https://review.openstack.org/#/c/245629 [2] https://review.openstack.org/#/c/566377 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Sep 4 23:05:12 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 4 Sep 2018 23:05:12 +0000 Subject: [openstack-dev] Run for a seat on the TC (I am, and you can too!) In-Reply-To: <20180831001034.GS26778@thor.bakeyournoodle.com> References: <20180831001034.GS26778@thor.bakeyournoodle.com> Message-ID: <20180904230512.iu2ftvmvvnvl5zzs@yuggoth.org> I'm standing for reelection to a third term on the OpenStack Technical Committee. Rather than the usual platform where I drone on and on about myself, I'm going to take this opportunity to run a public service announcement instead. Going into my second term I stated, "my personal vision for OpenStack is that of a vibrant and diverse community" and I think we've made progress on that front but still have a long way to go. The diversity (personal, professional and cultural) of our community has increased steadily, but we won't be able to sustain it if similar diversity among elected leaders continues to lag behind. Our leaders are frequently the faces of our community to those outside it, and should set an example for others to follow. OpenStack is built on a promise of representative leadership, and so we need people from under-represented segments of our community to stand and take up the challenge. There's just over 48 hours left to send in your self-nomination for a seat on the TC. If you can't meet the time commitments but know someone who can, encourage them to run. And, of course, when the time comes, vote. Vote for the candidates who best represent you, your beliefs, your ideals. Vote for those who share your vision for who, how and what OpenStack should be. If you've been paying attention these past years, you already know my opinions. 
I'd rather hear some new opinions from you. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Tue Sep 4 23:39:35 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 4 Sep 2018 18:39:35 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-31 update In-Reply-To: <84f4ab63-5790-1ba8-7be2-eadc98f3b3ae@gmail.com> References: <84f4ab63-5790-1ba8-7be2-eadc98f3b3ae@gmail.com> Message-ID: Would it be helpful to factor some of the common code out into an Oslo library so projects basically just have to subclass, implement check functions, and add them to the _upgrade_checks dict? It's not a huge amount of code, but a bunch of it seems like it would need to be copy-pasted into every project. I have a tentative topic on the Oslo PTG schedule for this but figured I should check if it's something we even want to pursue. On 09/04/2018 04:29 PM, Matt Riedemann wrote: > Just a few updates this week. > > 1. The story is now populated with a task per project that may have > something to complete for this goal [1]. PTLs, or their liaison(s), > should assign the task for their project to whomever is going to work on > the goal. The goal document in governance is being updated with the > appropriate links to storyboard [2]. > > 2. While populating the story and determining which projects to omit > (like infra, docs, QA were obvious), I left in the deployment projects > but those likely can/should opt-out of this goal for Stein since the > goal is more focused on service projects like keystone/cinder/glance. I > have pushed a docs update to the goal with respect to deployment > projects [3]. For deployment projects that don't plan on doing anything > with this goal, feel free to just invalidate the task in storyboard for > your project. > > 3.
I have a developer/contributor reference docs patch up for review in > nova [4] which is hopefully written generically enough that it can be > consumed by and used as a guide for other projects implementing these > upgrade checks. > > 4. I've proposed an amendment to the completion criteria for the goal > [5] saying that projects with the "supports-upgrade" tag should > integrate the checks from their project with their upgrade CI testing > job. That could be grenade or some other upgrade testing framework, but > it stands to reason that a project which claims to support upgrades and > has automated checks for upgrades, should be running those in their CI. > > Let me know if there are any questions. There will also be some time > during a PTG lunch-and-learn session where I'll go over this goal at a > high level, so feel free to ask questions during or after that at the > PTG as well. > > [1] https://storyboard.openstack.org/#!/story/2003657 > [2] https://review.openstack.org/#/c/599759/ > [3] https://review.openstack.org/#/c/599835/ > [4] https://review.openstack.org/#/c/596902/ > [5] https://review.openstack.org/#/c/599849/ > From openstack at nemebean.com Tue Sep 4 23:42:40 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 4 Sep 2018 18:42:40 -0500 Subject: [openstack-dev] [oslo] PTG Schedule Message-ID: <1f99321e-3829-0f1a-3fbc-d14f67c76795@nemebean.com> Hi, I've added some tentative times to the planning etherpad[1]. This is all subject to change but I wanted to get something out there for people to look at. If you have other project responsibilities that day please let me know if anything conflicts. As you can see we have a fair amount of flexibility in our timing. And of course if you have any last-minute topic additions feel free to make those as well. 
1: https://etherpad.openstack.org/p/oslo-stein-ptg-planning -Ben From pabelanger at redhat.com Tue Sep 4 23:46:30 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 4 Sep 2018 19:46:30 -0400 Subject: [openstack-dev] non-candidacy for TC Message-ID: <20180904234630.GB18268@localhost.localdomain> Greetings, After a year on the TC, I've decided not to run for another term. I'd like to thank the other TC members for helping bring me up to speed over the last year, and the community for originally voting. There is always work to do, and I'd like to use this email to encourage everybody to strongly consider running for the TC if you are interested in the future of OpenStack. It is a great learning opportunity, great humans to work with and a great project! Please do consider running if you are at all interested. Thanks again, Paul From doug at doughellmann.com Tue Sep 4 23:51:28 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 04 Sep 2018 19:51:28 -0400 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-31 update In-Reply-To: References: <84f4ab63-5790-1ba8-7be2-eadc98f3b3ae@gmail.com> Message-ID: <1536105072-sup-1783@lrrr.local> Excerpts from Ben Nemec's message of 2018-09-04 18:39:35 -0500: > Would it be helpful to factor some of the common code out into an Oslo > library so projects basically just have to subclass, implement check > functions, and add them to the _upgrade_checks dict? It's not a huge > amount of code, but a bunch of it seems like it would need to be > copy-pasted into every project. I have a tentative topic on the Oslo PTG > schedule for this but figured I should check if it's something we even > want to pursue. +1 if there's reusable bits. > > On 09/04/2018 04:29 PM, Matt Riedemann wrote: > > Just a few updates this week. > > > > 1. The story is now populated with a task per project that may have > > something to complete for this goal [1].
PTLs, or their liaison(s), > > should assign the task for their project to whomever is going to work on > > the goal. The goal document in governance is being updated with the > > appropriate links to storyboard [2]. > > > > 2. While populating the story and determining which projects to omit > > (like infra, docs, QA were obvious), I left in the deployment projects > > but those likely can/should opt-out of this goal for Stein since the > > goal is more focused on service projects like keystone/cinder/glance. I > > have pushed a docs updated to the goal with respect to deployment > > projects [3]. For deployment projects that don't plan on doing anything > > with this goal, feel free to just invalidate the task in storyboard for > > your project. > > > > 3. I have a developer/contributor reference docs patch up for review in > > nova [4] which is hopefully written generically enough that it can be > > consumed by and used as a guide for other projects implementing these > > upgrade checks. > > > > 4. I've proposed an amendment to the completion criteria for the goal > > [5] saying that projects with the "supports-upgrade" tag should > > integrate the checks from their project with their upgrade CI testing > > job. That could be grenade or some other upgrade testing framework, but > > it stands to reason that a project which claims to support upgrades and > > has automated checks for upgrades, should be running those in their CI. > > > > Let me know if there are any questions. There will also be some time > > during a PTG lunch-and-learn session where I'll go over this goal at a > > high level, so feel free to ask questions during or after that at the > > PTG as well. 
> > > > [1] https://storyboard.openstack.org/#!/story/2003657 > > [2] https://review.openstack.org/#/c/599759/ > > [3] https://review.openstack.org/#/c/599835/ > > [4] https://review.openstack.org/#/c/596902/ > > [5] https://review.openstack.org/#/c/599849/ > > > From doug at doughellmann.com Wed Sep 5 00:14:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 04 Sep 2018 20:14:00 -0400 Subject: [openstack-dev] non-candidacy for TC In-Reply-To: <20180904234630.GB18268@localhost.localdomain> References: <20180904234630.GB18268@localhost.localdomain> Message-ID: <1536106319-sup-3962@lrrr.local> Excerpts from Paul Belanger's message of 2018-09-04 19:46:30 -0400: > Greetings, > > After a year on the TC, I've decided not to run for another term. I'd > like to thank the other TC members for helping bring me up to speed over > the last year and community for originally voting. There is always work > to do, and I'd like to use this email to encourage everybody to strongly > consider running for the TC if you are interested in the future of > OpenStack. > > It is a great learning opportunity, great humans to work with and great > project! Please do consider running if you are at all interested. > > Thanks again, > Paul > Thank you for serving this year, Paul! Doug From juliaashleykreger at gmail.com Wed Sep 5 00:15:28 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 4 Sep 2018 17:15:28 -0700 Subject: [openstack-dev] [election][tc] announcing candidacy In-Reply-To: References: Message-ID: That is an excellent question John! The most specific thing that is weighing on my mind is elevating and supporting contributors. While this is not new, I think we as a community need to refocus on it because they are the very fibers that make up the fabric of our community and ultimately the electorate. I also feel that we focus a bit too much on what is new without having the data to really back it up.
With so many project teams and working groups, it is going to take time for the TC to really digest and attempt to draw any actionable direction from the health assessment that has been underway over the past few months. -Julia On Tue, Sep 4, 2018 at 2:07 PM John Dickinson wrote: > > > > On 4 Sep 2018, at 12:16, Julia Kreger wrote: > > > Greetings Stackers! > > > > I hereby announce my candidacy for a position on the OpenStack > > Technical > > Committee. > > > > In many respects I consider myself a maverick, except reality is > > sometimes > > entirely different than my own self perception, upon reflection. > > I find self reflection and introspection to be powerful tools, along > > with > > passion and desire for the common good. That desire for the common > > good > > is the driving force behind my involvement in OpenStack, which I hope > > to > > see as a vibrant and thriving community for years to come. > > > > Have things changed? Yes, I think they are ever evolving. I think we > > can only > > take the logical paths that we see before us at the time. Does this > > mean > > we will make mistakes? Absolutely, but mistakes are also opportunities > > to learn and evolve as time goes on; which perhaps is an unspoken > > backbone > > of our community. The key is that we must not fear change but embrace > > it. > > > > Changing our community for the better is a process we can only take > > one step at a time, and we must recognize our strength > > is in our diversity. As we move forward, as we evolve, we need to keep > > in > > mind our goals and overall vision. In a sense, these things vary > > across all > > projects, but our central community vision and goal helps provide > > direction. > > > > As we continue our journey, I believe we need to lift up new > > contributors, > > incorporate new thoughts, and new ideas. 
Embracing change while > > keeping our > > basic course so new contributors can better find and integrate with > > our > > community as we continue forward. We need to listen and take that as > > feedback to better understand other perspectives, for it is not only > > our singular personal perspective which helps give us direction, > > but the community as a whole. > > > > For those who do not know me well my name is Julia Ashley Kreger. > > Often > > I can be found on IRC as TheJulia, in numerous OpenStack related > > channels. > > I have had the pleasure of serving the community this past year on the > > Technical Committee. I have also served the ironic community quite a > > bit > > during my time in the OpenStack community, which began during the Juno > > cycle. > > > > I am the current Project Team Lead for the Ironic team. I began > > serving in that capacity starting with the Rocky cycle. Prior, > > I served as the team's release liaison. You might have seen me as one > > of those crazy people advocating for standalone usage. Prior lives > > included deploying and operating complex systems, but that is enough > > about me. > > > > Ultimately I believe I bring a different perspective to the TC and it > > is for > > this, and my many strong beliefs and experiences, I feel I am well > > suited > > > > to serve the community for another year on the Technical Committee. 
> > > > Thank you for your consideration, > > > > Julia > > > > freenode: TheJulia > > Twitter: @ashinclouds > > https://www.openstack.org/community/members/profile/19088/julia-kreger > > http://stackalytics.com/?release=all&user_id=juliaashleykreger > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Julia, > > Do you have any specific examples of new ideas you are wanting to > propose or advocate for, should you be re-elected? > > --John > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sukhdevkapur at gmail.com Wed Sep 5 00:49:27 2018 From: sukhdevkapur at gmail.com (Sukhdev Kapur) Date: Tue, 4 Sep 2018 17:49:27 -0700 Subject: [openstack-dev] [Neutron] Tungsten Fabric (formally OpenContrail) at Denver PTG In-Reply-To: References: Message-ID: Folks, This is a reminder of this event at OpenStack PTG in Denver. If you have not already RSVP'd please use the link below to do so. Couple of changes - this event is from 9-5 (not 9-6), and lunch is from 12:30-1:30pm (not 1:00-2:00). Look forward to seeing you there. regards -Sukhdev On Tue, Aug 28, 2018 at 6:36 PM Sukhdev Kapur wrote: > The Tungsten Fabric community invites you to join us at the OpenStack > > PTG in Denver > > to discuss and contribute to two great projects: Tungsten > > Fabric > > and Networking-OpenContrail > > . > > > We’ll be meeting on Tuesday, September 11, in Room Telluride B of Renaissance > Denver Stapleton Hotel from 9am - 6pm. 
Here’s the agenda: > > > 9am - 1:00 pm: *Networking-OpenContrail* > > Networking-OpenContrail is the OpenStack Neutron ML2 driver to > integrate Tungsten Fabric to OpenStack. It is designed to eventually > replace the old monolithic driver. > > This session will provide an update on the project, and then we’ll > discuss the next steps. > > > 1:00-2:00: Lunch > > > 2:00-6:00: *Tungsten **Fabric * > > In this session, we’ll start with a brief overview of Tungsten > Fabric for the benefit of those who may be new to the project. > > Then we’ll dive in deeper, discussing the Tungsten Fabric > architecture, developer workflow, getting patches into Gerrit, and building > and testing TF for OpenStack and Kubernetes. > > > > We hope you’ll join us: > > > *Register* for the PTG here > , > and > > Please let us know if you’ll attend these sessions: RSVP > > > > > Sukhdev Kapur and TF Networking Team > > IRC - Sukhdev > > sukhdevkapur at gmail.com > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Sep 5 00:49:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 4 Sep 2018 19:49:41 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-31 update In-Reply-To: References: <84f4ab63-5790-1ba8-7be2-eadc98f3b3ae@gmail.com> Message-ID: On 9/4/2018 6:39 PM, Ben Nemec wrote: > Would it be helpful to factor some of the common code out into an Oslo > library so projects basically just have to subclass, implement check > functions, and add them to the _upgrade_checks dict? It's not a huge > amount of code, but a bunch of it seems like it would need to be > copy-pasted into every project. I have a tentative topic on the Oslo PTG > schedule for this but figured I should check if it's something we even > want to pursue. 
Yeah I'm not opposed to trying to pull the nova stuff into a common library for easier consumption in other projects, I just haven't devoted the time for it, nor will I probably have the time to do it. If others are willing to investigate that it would be great. -- Thanks, Matt From fungi at yuggoth.org Wed Sep 5 00:54:53 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 Sep 2018 00:54:53 +0000 Subject: [openstack-dev] non-candidacy for TC In-Reply-To: <20180904234630.GB18268@localhost.localdomain> References: <20180904234630.GB18268@localhost.localdomain> Message-ID: <20180905005453.jn4jqsj7hsmjtkgn@yuggoth.org> On 2018-09-04 19:46:30 -0400 (-0400), Paul Belanger wrote: > After a year on the TC, I've decided not to run for another term. [...] Thank you for your service (past and future)! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amy at demarco.com Wed Sep 5 00:58:08 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 4 Sep 2018 17:58:08 -0700 Subject: [openstack-dev] non-candidacy for TC In-Reply-To: <20180904234630.GB18268@localhost.localdomain> References: <20180904234630.GB18268@localhost.localdomain> Message-ID: Thanks for all your contributions while on the TC Paul! Amy (spotz) On Tue, Sep 4, 2018 at 4:46 PM, Paul Belanger wrote: > Greetings, > > After a year on the TC, I've decided not to run for another term. I'd > like to thank the other TC members for helping bring me up to speed over > the last year and community for originally voting. There is always work > to do, and I'd like to use this email to encourage everybody to strongly > consider running for the TC if you are interested in the future of > OpenStack. > > It is a great learning opportunity, great humans to work with and great > project! Please do consider running if you are at all interested. 
> > Thanks again, > Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Sep 5 01:36:45 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 4 Sep 2018 20:36:45 -0500 Subject: [openstack-dev] [nova] No weekly meeting on Thursday September 13 In-Reply-To: References: Message-ID: <8b3eac26-9003-a6cb-bcc8-920ef903efcc@gmail.com> On 9/4/2018 4:13 PM, melanie witt wrote: > The next meeting will be on Thursday September 20 at 1400 UTC [1]. I'm assuming we're going to have a meeting *this* week though, right? -- Thanks, Matt From mriedemos at gmail.com Wed Sep 5 01:42:26 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 4 Sep 2018 20:42:26 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> Message-ID: <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> On 9/4/2018 4:27 PM, melanie witt wrote: > Yes, we should definitely trim the placement DB migrations to only > things relevant to placement. And we can use this opportunity to get rid > of cruft too and squash all of the placement migrations together to > start at migration 1 for the placement repo. If anyone can think of a > problem with doing that, please shout it out. Umm, nova-manage db sync creates entries in a sqlalchemy-migrate versions table, something like that, to track per database what the latest migration sync version has been. 
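A minimal illustration of that bookkeeping may help here; the sketch below assumes sqlalchemy-migrate's usual migrate_version table layout, and the repository path and version number are illustrative, not nova's actual schema:

```python
import sqlite3

# Illustrative only: sqlalchemy-migrate records the latest applied
# migration per repository in a bookkeeping table (commonly named
# migrate_version). The exact column layout here is an assumption.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE migrate_version ("
    "  repository_id VARCHAR(250) PRIMARY KEY,"
    "  repository_path TEXT,"
    "  version INTEGER)"
)

# A database copied from nova_api still claims to be at migration 61...
conn.execute(
    "INSERT INTO migrate_version VALUES ('nova_api', '/illustrative/path', 61)"
)
(version,) = conn.execute(
    "SELECT version FROM migrate_version WHERE repository_id = 'nova_api'"
).fetchone()
assert version == 61

# ...so a squashed migration series restarting at 1 cannot run until
# this bookkeeping row is reset (or rewritten) after the copy.
conn.execute(
    "UPDATE migrate_version SET version = 0 WHERE repository_id = 'nova_api'"
)
```

In other words, squashing the scripts without touching this row would leave the migration tooling believing migrations 1 through 61 had already run on the copied database.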
Based on that, and the fact that I thought our DB extraction policy was mostly to tell operators to copy the nova_api database and throw it elsewhere in a placement database, the migrate versions table is going to be saying you're at 061 and you can't start new migrations from 1 at that point, unless you wipe out that versions table after you copy the API DB. I could be wrong, but just copying the database, squashing/trimming the migration scripts and resetting the version to 1, and assuming things are going to be hunky dory doesn't sound like it will work to me. -- Thanks, Matt From 18800173600 at 163.com Wed Sep 5 01:58:53 2018 From: 18800173600 at 163.com (zhangwenqing) Date: Wed, 5 Sep 2018 09:58:53 +0800 (CST) Subject: [openstack-dev] Dose anyone have use Vitrage to build a mature project for RCA or any other purpose? In-Reply-To: References: Message-ID: <544dd684.4fbf.165a7742cea.Coremail.18800173600@163.com> Thanks for your reply! So how do you analyse the RCA? Do you ever use any statistical methods like time series or machine learning methods? Or do you just use the method that Vitrage provides? zhangwenqing >Date: Tue, 4 Sep 2018 12:29:43 +0300 >From: Ifat Afek >To: "OpenStack Development Mailing List (not for usage questions)" > >Subject: Re: [openstack-dev] Dose anyone have use Vitrage to build a > mature project for RCA or any other purpose? >Message-ID: > >Content-Type: text/plain; charset="utf-8" > >On Tue, Sep 4, 2018 at 11:41 AM zhangwenqing <18800173600 at 163.com> wrote: > >> I want to use Vitrage for my AIOps project, but I can't get any relevant >> information, and I think this is not a mature project. Does anyone have >> relevant experience? Would you please give me some advice? >> > >Hi, > >Vitrage is used in production environments as part of Nokia CloudBand >Infrastructure Software and CloudBand Network Director products. The >project exists for three years now, and it is mature. >I'll be happy to help if you have other questions.
> >Br, >Ifat >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From linghucongsong at 163.com Wed Sep 5 02:02:02 2018 From: linghucongsong at 163.com (linghucongsong) Date: Wed, 5 Sep 2018 10:02:02 +0800 (CST) Subject: [openstack-dev] [tricircle] Tricircle or Trio2o In-Reply-To: References: <7ed0df37.65a5.164f8954cf9.Coremail.linghucongsong@163.com> Message-ID: <6df6b0d7.464e.165a7770dd6.Coremail.linghucongsong@163.com> Hi! We have made Trio2o work with the master version of OpenStack, as in the PR below. https://review.openstack.org/#/c/596258/ In our plan, the next step is to make Trio2o work with Tricircle. See the plan below. https://etherpad.openstack.org/p/tricircle-stein-plan At 2018-08-03 22:57:06, "Andrea Franceschini" wrote: >Hello Ling, > >thank you for answering, I'm glad to see that the Trio2o project will be >revived in the near future. > >Meanwhile it would be nice to know what approach people use to deploy >multi-site openstack. > >I mean, I've read somewhere about solutions using something like a >multi-site heat, but I failed to dig into this as I couldn't find any >resource. > >Thanks, > >Andrea > >Il giorno gio 2 ago 2018 alle ore 05:01 linghucongsong > ha scritto: >> >> Hi Andrea! >> Yes, just as you said. Tricircle currently works only for networking, because Trio2o is not >> an official OpenStack project, so for a long time nobody has contributed to it. >> But recently, in the next OpenStack Stein cycle, we have a plan to make Tricircle and >> Trio2o work together in the Tricircle Stein plan.
See the link below: >> https://etherpad.openstack.org/p/tricircle-stein-plan >> After this is finished we can use Tricircle and Trio2o together and make multisite OpenStack >> solutions more effective. >> >> >> >> >> At 2018-08-02 00:55:30, "Andrea Franceschini" wrote: >> >Hello All, >> > >> >While I was looking for multisite openstack solutions I stumbled on the >> >Tricircle project which seemed fairly perfect for the job except that >> >it was split in two parts, tricircle itself for the network part and >> >Trio2o for all the rest. >> > >> >Now it seems that the Trio2o project is no longer maintained and I'm >> >wondering what other options exist for multisite openstack, given >> >that tricircle seems more NFV oriented. >> > >> >Actually a heat multisite solution would work too, but I cannot find >> >any reference to this kind of solution. >> > >> >Do you have any idea/advice? >> > >> >Thanks, >> > >> >Andrea >> > >> >__________________________________________________________________________ >> >OpenStack Development Mailing List (not for usage questions) >> >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed Sep 5 02:11:41 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 4 Sep 2018 22:11:41 -0400 Subject: [openstack-dev] [election] [tc] thank you Message-ID: After 2 years at the TC, I feel lucky enough to have been part of this group where I hope that my modest contributions helped to make OpenStack a bit better. I've learnt so many things and worked with a talented group where it's not easy every day, but we have made and will continue to make progress in the future. I personally feel like some rotation needs to happen, therefore I won't run in the current election.
I don't plan to leave or anything, I just wanted to say "merci" to the community who gave me a chance to be part of this team. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From sangho at opennetworking.org Wed Sep 5 02:27:50 2018 From: sangho at opennetworking.org (Sangho Shin) Date: Wed, 5 Sep 2018 10:27:50 +0800 Subject: [openstack-dev] [Neutron][SONA][networking-onos] Releasing stable/rocky branch for networking-onos Message-ID: Hello all, The stable/rocky branch for the networking-onos project (https://github.com/openstack/networking-onos ) has been released. If you want to test it, please follow the all-in-one installation instructions, https://wiki.onosproject.org/display/ONOS/All-in-one+Installation+Guide , replacing queens with rocky in the local.conf file. For more detail, please check out our SONA wiki page, https://wiki.onosproject.org/display/ONOS/SONA%3A+DC+Network+Virtualization Thank you, Sangho SONA and networking-onos development team -------------- next part -------------- An HTML attachment was scrubbed... URL: From s at cassiba.com Wed Sep 5 02:34:33 2018 From: s at cassiba.com (Samuel Cassiba) Date: Tue, 4 Sep 2018 19:34:33 -0700 Subject: [openstack-dev] [chef] State of the Kitchen: 7th Edition Message-ID: HTML: https://samuel.cassi.ba/state-of-the-kitchen-7th-edition This is the seventh installment of what is going on with Chef OpenStack. The goal is to give a quick overview to see our progress and what is on the menu. Feedback is always welcome on the content and on what you would like to see more of. ### Notable Changes * Ironic is returning to [active development](https://review.openstack.org/#/q/topic:refactor-ironic-cookbook). This is currently targeting Rocky, but it will be backported as much as automated testing will allow. The cookbook currently works through to Tempest and InSpec, but resource constraints prohibit a more comprehensive test.
* Chef OpenStack is on [docs.o.o](https://docs.openstack.org/openstack-chef/latest/)! It currently covers the Kitchen scenario, and needs to be fleshed out more. A more comprehensive deploy guide is in the making. * Sous Chefs released v5.2.1 of the [apache2](https://supermarket.chef.io/cookbooks/apache2) cookbook today. This will alleviate an issue with ports.conf conflicting between cookbook and package. * openstack/openstack-chef-repo has served us for many years, but nothing is an unmoving mover. Development has shifted over to openstack/openstack-chef and openstack-chef-repo will be ferried to the great bit bucket in the cloud. [o7](https://review.openstack.org/#/q/topic:retire-openstack-chef-repo) ### Integration * With the aforementioned repo retirement, integration has shifted to openstack/openstack-chef. * Docker stabilization efforts are looking good to introduce a containerized integration job for CentOS. Ubuntu still does not play nicely using Docker through Kitchen. This will result in gating jobs using both the Zuul-provided machine and Docker. The focus is AIO at this time. ### Stabilization * fog-openstack 0.2 has been released, which makes a major change to how Keystone endpoints are handled. This is in anticipation of dropping a hard version string for Identity API versions. 0.2.1 has been released to [rubygems](https://rubygems.org/gems/fog-openstack), which will resolve the issues 0.2.0 exposed. For now, however, the client cookbook has been constrained to match ChefDK. The target for ChefDK to support fog-openstack 0.2 is, at this point, the unreleased ChefDK 3.3.0. [Further context.](http://lists.openstack.org/pipermail/openstack-dev/2018-September/134185.html) ### On The Menu *The Perfect (Indoor) Steak* * Kosher salt * Black pepper * 1 tbsp (15 ml) olive oil * 1 (8 to 12 ounce) boneless tenderloin, ribeye or strip steak 1. Set your immersion cooker to 130F (54.4C) -- y'all have one of these, right? 2.
Generously season both sides with salt and pepper. 3. Place the steak in a medium zipper, or vacuum seal, bag. Seal with a vacuum sealer, or using the water immersion technique. 4. Place the bag in the water bath, and set the timer for 2 hours. This comes out to about medium-rare consistency. 5. After 2 hours, remove the steak from the water bath and pat very dry with paper towels. 6. Heat oil in a medium cast iron skillet over high heat until it shimmers. 7. Add steak and sear until well-browned, about 30 seconds per side. 8. Let rest for 5 minutes. 9. Enjoy. Your humble line cook, Samuel Cassiba (scas) From prometheanfire at gentoo.org Wed Sep 5 03:20:49 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 4 Sep 2018 22:20:49 -0500 Subject: [openstack-dev] check-requirements and you Message-ID: <20180905032049.llvlv2y3mjmw2fpx@gentoo.org> With the move to per-project requirements (aka divergent requirements) we started allowing projects to have differing exclusions and minimums. As long as projects still tested against upper-constraints we were good. Part of the reason why we use upper-constraints is to ensure that project A and project B are co-installable. This is especially useful to distro packagers who can then target upper-constraints for any package updates they need. Another reason is that we (the requirements team) review new global-requirements for code quality, licensing and the like, all of which are useful to OpenStack as a whole. If a project's dependencies are compatible with the global list, and the global list is compatible with the upper-constraints, then the project's dependencies are compatible with the upper-constraints. All the above lets us all work together and use any lib listed in global-requirements (at the upper-constraints version). This is all ensured by having the check-requirements job enabled for your project. It helps ensure co-installability, code quality, python version compatibility, etc.
So please make sure that, if you are running everything in your own zuul config (not project-config), you have the check-requirements job as well. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From aj at suse.com Wed Sep 5 05:04:12 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 5 Sep 2018 07:04:12 +0200 Subject: [openstack-dev] check-requirements and you In-Reply-To: <20180905032049.llvlv2y3mjmw2fpx@gentoo.org> References: <20180905032049.llvlv2y3mjmw2fpx@gentoo.org> Message-ID: On 2018-09-05 05:20, Matthew Thode wrote: > With the move to per-project requirements (aka divergent requirements) > we started allowing projects to have differing exclusions and minimums. > As long as projects still tested against upper-constraints we were good. > > Part of the reason why we use upper-constraints is to ensure that > project A and project B are co-installable. This is especially useful > to distro packagers who can then target upper-constraints for any > package updates they need. Another reason is that we (the requirements > team) reviews new global-requirements for code quality, licencing and > the like, all of which are useful to Openstack as a whole. > > If a projects dependencies are compatible with the global list, and the > global list is compatible with the upper-constraints, then the > projects' dependencies are compatible with the upper-constraints. > > All the above lets us all work together and use any lib listed in > global-requirements (at the upper-constraints version). This is all > ensured by having the check-requirements job enabled for your project. > It helps ensure co-installability, code quality, python version > compatibility, etc.
And also set up and run the lower-constraints jobs - you can use the new template openstack-lower-constraints-jobs for this, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From prometheanfire at gentoo.org Wed Sep 5 05:12:07 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 5 Sep 2018 00:12:07 -0500 Subject: [openstack-dev] check-requirements and you In-Reply-To: References: <20180905032049.llvlv2y3mjmw2fpx@gentoo.org> Message-ID: <20180905051207.ntz2ovjkwhcv6ldh@gentoo.org> On 18-09-05 07:04:12, Andreas Jaeger wrote: > On 2018-09-05 05:20, Matthew Thode wrote: > > With the move to per-project requirements (aka divergent requirements) > > we started allowing projects to have differing exclusions and minimums. > > As long as projects still tested against upper-constraints we were good. > > > > Part of the reason why we use upper-constraints is to ensure that > > project A and project B are co-installable. This is especially useful > > to distro packagers who can then target upper-constraints for any > > package updates they need. Another reason is that we (the requirements > > team) reviews new global-requirements for code quality, licencing and > > the like, all of which are useful to Openstack as a whole. > > > > If a projects dependencies are compatible with the global list, and the > > global list is compatible with the upper-constraints, then the > > projects' dependencies are compatible with the upper-constraints. > > > > All the above lets us all work together and use any lib listed in > > global-requirements (at the upper-constraints version). This is all > > ensured by having the check-requirements job enabled for your project. > > It helps ensure co-installability, code quality, python version > > compatibility, etc. 
So please make sure that if you are running > > everything in your own zuul config (not project-config) that you have > > the check-requirements job as well. > > > And also set up and run the lower-constraints jobs - you can use the new > template openstack-lower-constraints-jobs for this, > Of course!!! Lower-constraints are useful (again, mainly for packagers, but also to know the state of our dependencies). Specifying and testing lower-constraints means that there's less potential for bugs caused by a project missing the deprecation of some library feature that it used. It also means that we have a second set of constraints (one that's just for that project) that we know works and can be targeted if needed. This massively increases the flexibility of deployments and packaging. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sundar.nadathur at intel.com Wed Sep 5 05:17:01 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Tue, 4 Sep 2018 22:17:01 -0700 Subject: [openstack-dev] [Neutron] [Cyborg] Cyborg-Neutron interaction for programmable NICs Message-ID: Hello Neutron folks, There is emerging interest in programmable NICs that combine FPGAs and networking in different ways. I wrote up one category of them here: https://etherpad.openstack.org/p/fpga-networking This was discussed at the Neutron meeting on Sep 3 [1]. This approach to programmable networking raises many questions. I have summarized them in this etherpad and proposed a possible solution. Please review this. We have a session at the PTG on Thursday (Sep 13) from 3:15 to 4:15 pm on this topic. Given the level of interest that we are seeing, I hope we get some agreement early enough that we can do at least some POCs in the Stein cycle.
[1] http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2018-09-03.log.html#t2018-09-03T21:43:48 Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Sep 5 07:25:26 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Sep 2018 16:25:26 +0900 Subject: [openstack-dev] [QA] Rocky Retrospective Etherpad Message-ID: <165a89f2533.ff2603fb27903.6095130848737704899@ghanshyammann.com> Hi All, I have started an etherpad for a Rocky cycle retrospective for QA - https://etherpad.openstack.org/p/qa-rocky-retrospective This will be discussed at the PTG on Tuesday, 9:30-10:00 AM, so please add your feedback/comments before then. Everyone is welcome to add feedback that will help us improve things in the next cycle. -gmann From thierry at openstack.org Wed Sep 5 07:40:04 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 5 Sep 2018 09:40:04 +0200 Subject: [openstack-dev] Retiring openstack-infra/odsreg and openstack-infra/puppet-odsreg In-Reply-To: <770fe3b5-911c-ad05-d21f-25bcfa10ccf6@suse.com> References: <770fe3b5-911c-ad05-d21f-25bcfa10ccf6@suse.com> Message-ID: <436d385e-4648-84c4-bed8-482a4214fcc7@openstack.org> Andreas Jaeger wrote: > Puppet-odsreg is unused nowadays and it seems that odsreg is unused as > well. I'll propose changes to retire them with topic "retire-odsreg", So... we actually still used odsreg until recently for Forum submissions. The Berlin Forum is the first one where we won't use it, so I feel it's a bit too early to retire that codebase. If the current plan (which is to reuse the CFP app from the website) does not work, we'd certainly welcome having a plan B. If you REALLY want it gone NOW I guess we could just push it to GitHub and keep it there until we are 100% sure we won't need it anymore, but that sounds a bit silly.
-- Thierry Carrez (ttx) From thierry at openstack.org Wed Sep 5 07:51:32 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 5 Sep 2018 09:51:32 +0200 Subject: [openstack-dev] [TC][Searchlight] Searchlight project missing from the OpenStack website In-Reply-To: References: Message-ID: <3486a9f3-a6e4-54e2-1831-2dc6122e895c@openstack.org> Trinh Nguyen wrote: > I'm not sure if I missed something but Searchlight is not listed in the > Software section of the OpenStack website [1]. Is it because Searchlight > has missed the Rocky cycle? Searchlight appears in "Shared services" at the bottom of: https://www.openstack.org/software/project-navigator/openstack-components#openstack-services Regards, -- Thierry Carrez (ttx) From dangtrinhnt at gmail.com Wed Sep 5 07:53:18 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 5 Sep 2018 16:53:18 +0900 Subject: [openstack-dev] [TC][Searchlight] Searchlight project missing from the OpenStack website In-Reply-To: <3486a9f3-a6e4-54e2-1831-2dc6122e895c@openstack.org> References: <3486a9f3-a6e4-54e2-1831-2dc6122e895c@openstack.org> Message-ID: Hi Thierry, Thanks. I saw it :) *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Wed, Sep 5, 2018 at 4:51 PM Thierry Carrez wrote: > Trinh Nguyen wrote: > > I'm not sure if I missed something but Searchlight is not listed in the > > Software section of the OpenStack website [1]. Is it because Searchlight > > has missed the Rocky cycle? 
> > Searchlight appears in "Shared services" at the bottom of: > > > https://www.openstack.org/software/project-navigator/openstack-components#openstack-services > > Regards, > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Sep 5 08:01:50 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 5 Sep 2018 10:01:50 +0200 Subject: [openstack-dev] [election] [tc] thank you In-Reply-To: References: Message-ID: <4671e7de-6155-a61d-1625-5487c7250e32@openstack.org> Emilien Macchi wrote: > After 2 years at the TC, I feel lucky enough to have been part of this > group where I hope that my modest contributions helped to make OpenStack > a bit better. > I've learnt so many things and worked with a talented group where it's > not easy every day, but we have made and will continue to progress in > the future. > I personally feel like some rotation needs to happen, therefore I won't > run the current election. > > I don't plan to leave or anything, I just wanted to say "merci" to the > community who gave me a chance to be part of this team. Thanks for your service, Emilien! Your quality input on both technical and social aspects of the TC activity helped us move forward over those past 2 years. Please consider serving again in the future! 
-- Thierry Carrez (ttx) From gmann at ghanshyammann.com Wed Sep 5 08:34:27 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Sep 2018 17:34:27 +0900 Subject: [openstack-dev] [QA] [all] QA Stein PTG Planning Message-ID: <165a8de52ee.126eac3cc30493.5374930562208326542@ghanshyammann.com> Hi All, As we are close to the PTG, I have prepared the QA Stein PTG Schedule - https://ethercalc.openstack.org/Stein-PTG-QA-Schedule Details of each session can be found in this etherpad - https://etherpad.openstack.org/p/qa-stein-ptg This time we will have the QA Help Hour on Monday only, with the next 3 days for specific topic discussions and code sprints. We still have space for more sessions or topics if any of you would like to add one; if so, please add it to the etherpad with your IRC name. The session schedule is flexible and we can reschedule on request, but do let me know before 7th Sept. If anyone cannot travel to the PTG and would like to attend remotely, do let me know and I can plan something for remote participation. -gmann From tobias.urdin at binero.se Wed Sep 5 08:34:34 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Wed, 5 Sep 2018 10:34:34 +0200 Subject: [openstack-dev] [election] [tc] thank you In-Reply-To: References: Message-ID: <41debd25-4803-38c5-faa2-ab8e2fc5e38b@binero.se> Emilien, You've been an inspiration to the community and to me personally! Thanks for helping make OpenStack better in all aspects. Best regards Tobias On 09/05/2018 04:15 AM, Emilien Macchi wrote: > After 2 years at the TC, I feel lucky enough to have been part of this > group where I hope that my modest contributions helped to make > OpenStack a bit better. > I've learnt so many things and worked with a talented group where it's > not easy every day, but we have made and will continue to progress in > the future. > I personally feel like some rotation needs to happen, therefore I > won't run the current election.
> > I don't plan to leave or anything, I just wanted to say "merci" to the > community who gave me a chance to be part of this team. > -- > Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Sep 5 08:50:54 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 5 Sep 2018 10:50:54 +0200 Subject: [openstack-dev] No Neutron QoS meeting on Tuesday 11.09 Message-ID: Hi, Due to the PTG in Denver, the QoS meeting on Tuesday 11.09 is cancelled. The next one will be on 25.09 — Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Wed Sep 5 08:51:39 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 5 Sep 2018 10:51:39 +0200 Subject: [openstack-dev] Neutron CI meeting on Tuesday 11.09 Message-ID: <4AA6DB70-D2E9-44B2-8DB5-C784574E75F5@redhat.com> Hi, Due to the PTG in Denver, the CI meeting on Tuesday 11.09 is cancelled. The next one will be on 18.09 — Slawek Kaplonski Senior software engineer Red Hat From iwienand at redhat.com Wed Sep 5 08:53:10 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 5 Sep 2018 18:53:10 +1000 Subject: [openstack-dev] [all]-ish : Updates required for readthedocs publishers Message-ID: <7306a7a3-e42f-22d0-c229-0ce74e1cb2e4@redhat.com> Hello, If you're interested in the projects mentioned below, you may have noticed a new, failing, non-voting job "your-readthedocs-job-requires-attention". Spoiler alert: your readthedocs job requires attention. It's easy to miss because publishing happens in the post pipeline and people don't often look at the results of these jobs. Please see the prior email on this http://lists.openstack.org/pipermail/openstack-dev/2018-August/132836.html for what to do (if you read the failing job logs, it also points you to this). I (or #openstack-infra) can help, but only once the openstackci user is given permissions to the RTD project by its current owner.
Thanks, -i The following projects have this job now: openstack-infra/gear openstack/airship-armada openstack/almanach openstack/ansible-role-bindep openstack/ansible-role-cloud-launcher openstack/ansible-role-diskimage-builder openstack/ansible-role-cloud-fedmsg openstack/ansible-role-cloud-gearman openstack/ansible-role-jenkins-job-builder openstack/ansible-role-logrotate openstack/ansible-role-ngix openstack/ansible-role-nodepool openstack/ansible-role-openstacksdk openstack/ansible-role-shade openstack/ansible-role-ssh openstack/ansible-role-sudoers openstack/ansible-role-virtualenv openstack/ansible-role-zookeeper openstack/ansible-role-zuul openstack/ara openstack/bareon openstack/bareon-allocator openstack/bareon-api openstack/bareon-ironic openstack/browbeat openstack/downpour openstack/fuel-ccp openstack/fuel-ccp-installer openstack/fuel-noop-fixtures openstack/ironic-staging-drivers openstack/k8s-docker-suite-app-murano openstack/kloudbuster openstack/nerd-reviewer openstack/networking-dpm openstack/nova-dpm openstack/ooi openstack/os-faults openstack/packetary openstack/packetary-specs openstack/performa openstack/poppy openstack/python-almanachclient openstack/python-jenkins openstack/rally openstack/solar openstack/sqlalchemy-migrate openstack/stackalytics openstack/surveil openstack/swauth openstack/turbo-hipster openstack/virtualpdu openstack/vmtp openstack/windmill openstack/yaql From dtantsur at redhat.com Wed Sep 5 09:06:07 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 5 Sep 2018 11:06:07 +0200 Subject: [openstack-dev] [all]-ish : Updates required for readthedocs publishers In-Reply-To: <7306a7a3-e42f-22d0-c229-0ce74e1cb2e4@redhat.com> References: <7306a7a3-e42f-22d0-c229-0ce74e1cb2e4@redhat.com> Message-ID: On 09/05/2018 10:53 AM, Ian Wienand wrote: > Hello, > > If you're interested in the projects mentioned below, you may have > noticed a new, failing, non-voting job > "your-readthedocs-job-requires-attention". 
Spoiler alert: your > readthedocs job requires attention. It's easy to miss because > publishing happens in the post pipeline and people don't often look > at the results of these jobs. > > Please see the prior email on this > > http://lists.openstack.org/pipermail/openstack-dev/2018-August/132836.html > > for what to do (if you read the failing job logs, it also points you > to this). > > I (or #openstack-infra) can help, but only once the openstackci user > is given permissions to the RTD project by its current owner. > > Thanks, > > -i > > The following projects have this job now: > > openstack-infra/gear > openstack/airship-armada > openstack/almanach > openstack/ansible-role-bindep > openstack/ansible-role-cloud-launcher > openstack/ansible-role-diskimage-builder > openstack/ansible-role-cloud-fedmsg > openstack/ansible-role-cloud-gearman > openstack/ansible-role-jenkins-job-builder > openstack/ansible-role-logrotate > openstack/ansible-role-ngix > openstack/ansible-role-nodepool > openstack/ansible-role-openstacksdk > openstack/ansible-role-shade > openstack/ansible-role-ssh > openstack/ansible-role-sudoers > openstack/ansible-role-virtualenv > openstack/ansible-role-zookeeper > openstack/ansible-role-zuul > openstack/ara > openstack/bareon > openstack/bareon-allocator > openstack/bareon-api > openstack/bareon-ironic > openstack/browbeat > openstack/downpour > openstack/fuel-ccp > openstack/fuel-ccp-installer > openstack/fuel-noop-fixtures > openstack/ironic-staging-drivers I'm pretty sure we updated this in Rocky. Is it here because of stable branches? I can propose a backport if so. 
> openstack/k8s-docker-suite-app-murano > openstack/kloudbuster > openstack/nerd-reviewer > openstack/networking-dpm > openstack/nova-dpm > openstack/ooi > openstack/os-faults > openstack/packetary > openstack/packetary-specs > openstack/performa > openstack/poppy > openstack/python-almanachclient > openstack/python-jenkins > openstack/rally > openstack/solar > openstack/sqlalchemy-migrate > openstack/stackalytics > openstack/surveil > openstack/swauth > openstack/turbo-hipster > openstack/virtualpdu > openstack/vmtp > openstack/windmill > openstack/yaql > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From pierre.gaxatte at gmail.com Wed Sep 5 09:31:39 2018 From: pierre.gaxatte at gmail.com (Pierre Gaxatte) Date: Wed, 5 Sep 2018 11:31:39 +0200 Subject: [openstack-dev] [mistral] Cherry-pick migration to stable/rocky Message-ID: Hello, Related change: https://review.openstack.org/#/c/599606 I'd like to push this to stable/rocky because this bug affects me in production and I'd like to be able to upgrade to rocky to fix this. I understand that new migrations should not be added to a stable branch and Renat Akhmerov advised me to ask here to make an exception. Now for some context on this bug: The `auth_context` in `delayed_calls_v2` holds the entire catalog provided by the client to run actions and currently it can hold 64kB on mysql (JsonDictType => TEXT field). Some of our customers have a large catalog because we expose many regions to them and a region weighs around 5kB in our catalog (many services, long URL for the endpoints...). So above around 10 regions presented to a project the catalog cannot be held in the `auth_context` field and no execution can be performed in mistral from this project. 
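As a rough illustration of the arithmetic above (a sketch, not Mistral code -- the ~5 kB region size and the MySQL TEXT limit are the only real numbers here, and the dummy catalog shape is invented):

```python
# MySQL's TEXT column holds at most 65535 bytes. At roughly 5 kB of
# endpoint data per region, a serialized catalog alone blows past that
# limit at about 13 regions; in practice the rest of auth_context also
# takes space, pushing the tipping point down to around 10 regions.
import json

MYSQL_TEXT_LIMIT = 65535          # bytes a TEXT column can hold
BYTES_PER_REGION = 5 * 1024       # ~5 kB of endpoints per region (assumed)

def catalog_size(num_regions):
    """Approximate serialized size of a catalog with num_regions regions."""
    region = {"endpoints": "x" * BYTES_PER_REGION}  # stand-in payload
    catalog = [dict(region, region_id=i) for i in range(num_regions)]
    return len(json.dumps(catalog))

assert catalog_size(5) < MYSQL_TEXT_LIMIT   # small catalogs still fit
assert catalog_size(13) > MYSQL_TEXT_LIMIT  # large ones no longer do
```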
The change fixes that simply by increasing the maximum size of the field. So I'm seeking approval from the stable branch team to merge this change. Thanks, Pierre From dougal at redhat.com Wed Sep 5 09:52:41 2018 From: dougal at redhat.com (Dougal Matthews) Date: Wed, 5 Sep 2018 10:52:41 +0100 Subject: [openstack-dev] [mistral][release] Cherry-pick migration to stable/rocky Message-ID: On 5 September 2018 at 10:31, Pierre Gaxatte wrote: > Hello, > > Related change: https://review.openstack.org/#/c/599606 > > I'd like to push this to stable/rocky because this bug affects me in > production and I'd like to be able to upgrade to rocky to fix this. I > understand that new migrations should not be added to a stable branch > and Renat Akhmerov advised me to ask here to make an exception. > > Now for some context on this bug: > > The `auth_context` in `delayed_calls_v2` holds the entire catalog > provided by the client to run actions and currently it can hold 64kB on > mysql (JsonDictType => TEXT field). > > Some of our customers have a large catalog because we expose many > regions to them and a region weighs around 5kB in our catalog (many > services, long URL for the endpoints...). > So above around 10 regions presented to a project the catalog cannot be > held in the `auth_context` field and no execution can be performed in > mistral from this project. > > The change fixes that simply by increasing the maximum size of the field. > > So I'm seeking approval from the stable branch team to merge this change. > > Thanks, > Pierre > Thanks for sending this email. I am happy for the change to be backported; given that the Rocky release is just out the door, the change is minimal and easy to apply. The impact is also high enough, as it resolves problems for real production environments. We should add a release note to the backport, but I am happy for this to be added as a follow-up (before we make the next Rocky release).
(Note: I added [release] to the email subject, as I think that will make it visible to the right folks.) Cheers -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougal at redhat.com Wed Sep 5 09:54:25 2018 From: dougal at redhat.com (Dougal Matthews) Date: Wed, 5 Sep 2018 10:54:25 +0100 Subject: [openstack-dev] [mistral] [release] [stable] Cherry-pick migration to stable/rocky Message-ID: On 5 September 2018 at 10:52, Dougal Matthews wrote: > (Note: I added [release] to the email subject, as I think that will make > it visible to the right folks.) > Darn. It should have been [stable]. I have added that now. Sorry for the noise. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jazeltq at gmail.com Wed Sep 5 10:20:28 2018 From: jazeltq at gmail.com (Jaze Lee) Date: Wed, 5 Sep 2018 18:20:28 +0800 Subject: [openstack-dev] [nova][cinder] about unified limits Message-ID: Hello, Do nova and cinder use keystone's unified limits API to do the quota job? If not, is there a plan to do this? Thanks a lot. -- 谦谦君子 From jean-philippe at evrard.me Wed Sep 5 11:04:09 2018 From: jean-philippe at evrard.me (=?utf-8?q?jean-philippe=40evrard=2Eme?=) Date: Wed, 05 Sep 2018 13:04:09 +0200 Subject: [openstack-dev] =?utf-8?q?=5Belection=5D=5Btc=5D_TC_Candidacy?= Message-ID: <1fac-5b8fb800-1-7199a100@211223829> Hello everyone, I am hereby announcing my candidacy for a position on the OpenStack Technical Committee (TC). I believe that Open Source software is not only about code, but also a way to bring people together in order to find a solution to business problems. Many people find me an easy person to talk with, due to my open mindset and my "facilitator" spirit. Those qualities helped me build solutions in the past. I hope they will be helpful to the TC: While OpenStack is becoming more mature every day, it is facing (and will face) new challenges in terms of community and identity.
I have been following the OpenStack ecosystem since Kilo. I went through multiple companies and multiple hats (a cloud end-user, an OpenStack advocate in meetups and at FOSDEM, a product owner of the cloud strategy, architect of a community cloud, a deployer, a developer, a team lead), which gives me a unique view on OpenStack and other adjacent communities. I am now working full time on OpenStack for SUSE, focusing on deployment tooling. That growing involvement inspires me to be a TC candidate. I would like to help shape what the future of OpenStack could be. Even if my experience spans quite a few cycles, I consider myself an OpenStack newcomer. I like to see things with fresh eyes, and I do not hesitate to question the status quo. It usually gives me a different perspective to explore new conversations or find new solutions. I also think this freshness makes me very approachable to new community members, new users, or external communities. Listening to those inputs is very important to me: good software can only exist with proper requirements! I would like to focus my time at the TC on a general simplification of OpenStack. Simplification would first *reduce the barrier of entry for new contributors*, make community goals more easily reachable, and help connect adjacent communities. In this matter, I believe the technical 'python3-first' project will open the door to many positive improvements and simplifications; setting up a good knowledge transfer platform and best practices/recommendations for projects could help as well. Talking about best practices and simplification, I would like to help PTLs with their duties, as I believe TC members should be more supportive of the day-to-day work of the PTL and projects. I would love to see the TC as a provider of toolkits helping maintain and grow the community of each of the official projects. These could be the tools that projects do not have time to develop and grow.
The code would be common to OpenStack, reducing the overall complexity of the projects that currently carry those tools, in the same way the Release or OpenStack-Infra teams provide tools for the community. I have a few other ideas for simplifications, but instead of carrying on, I would prefer to hear from you and your ideas. So, please, contact me! Long story short: I would like to be there to help shape the community together, with your help. Thank you for your consideration, Jean-Philippe Evrard (evrardjp) From aj at suse.com Wed Sep 5 11:08:11 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 5 Sep 2018 13:08:11 +0200 Subject: [openstack-dev] Retiring openstack-infra/odsreg and openstack-infra/puppet-odsreg In-Reply-To: <436d385e-4648-84c4-bed8-482a4214fcc7@openstack.org> References: <770fe3b5-911c-ad05-d21f-25bcfa10ccf6@suse.com> <436d385e-4648-84c4-bed8-482a4214fcc7@openstack.org> Message-ID: <35325771-8ca9-15bc-8623-09df4344d8b8@suse.com> On 2018-09-05 09:40, Thierry Carrez wrote: > Andreas Jaeger wrote: >> Puppet-odsreg is unused nowadays and it seems that odsreg is unused as >> well. I'll propose changes to retire them with topic "retire-odsreg", > > So... we actually still used odsreg until recently for Forum > submissions. The Berlin Forum is the first one where we won't use it, so > I feel it's a bit too early to retire that codebase. If the current plan > (which is to reuse the CFP app from the website) does not work, we'd > certainly welcome having a plan B. > > If you REALLY want it gone NOW I guess we could just push it to GitHub > and keep it there until we are 100% sure we won't need it anymore, but > that sounds a bit silly. > OK, I'll retire puppet-odsreg only for now - please tell us once odsreg can be retired as well, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From mnaser at vexxhost.com Wed Sep 5 12:14:53 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 5 Sep 2018 08:14:53 -0400 Subject: [openstack-dev] [election] [tc] thank you In-Reply-To: References: Message-ID: <9C816173-5AB1-41EB-AC55-B343C8A51600@vexxhost.com> Émilien: I think you’re one of the role models of our community. Your leadership has helped make it easier for me to become more involved from leading a project to joining the TC. Thank you again! Regards, Mohammed Sent from my iPhone > On Sep 4, 2018, at 10:11 PM, Emilien Macchi wrote: > > After 2 years at the TC, I feel lucky enough to have been part of this group where I hope that my modest contributions helped to make OpenStack a bit better. > I've learnt so many things and worked with a talented group where it's not easy every day, but we have made and will continue to progress in the future. > I personally feel like some rotation needs to happen, therefore I won't run the current election. > > I don't plan to leave or anything, I just wanted to say "merci" to the community who gave me a chance to be part of this team. > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From rosmaita.fossdev at gmail.com Wed Sep 5 12:47:12 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 5 Sep 2018 08:47:12 -0400 Subject: [openstack-dev] [heat][glance] Heat image resource support issue In-Reply-To: References: Message-ID: On Thu, Aug 30, 2018 at 5:55 AM Rico Lin wrote: > Hi Glance team > Hi Rico, sorry about the delay in answering your email. 
> Glance V1 image API been deprecated for a long while, and V2 has been used > widely. Heat image resource would like to change to use V2 as well, but > there is an unsolved issue, which blocks us from adopting V2. Right now, to > image create require Heat to download the image to Heat service and > re-upload it to Glance. This behavior causes heat service a great burden > which in a heat team discussion (years ago), we decide to deprecated V1 > Image resource in Heat and will add V2 image resource once this is > resolved. > So I have been wondering if there's some workaround for this issue? Or if > glance can support accept URL as image import (and then reuse client lib > to download to glance side)? Or if anyone got a better solution for this? > Since Queens, Glance has had a 'web-download' import method that takes a URL [0]. It's enabled by default, but operators do have the ability to turn it off. (There's an API call to see what methods are enabled in a particular cloud.) Operators also have the ability to restrict what URLs are acceptable [1], but that's probably a good thing. In short, Glance does have the ability to do what you need since Queens, but there's no guarantee that it will be available in all clouds and for all URLs. If you foresee that as a problem, it would be a good idea to get together with the Glance team at the PTG to discuss this issue. Please add it as a topic to the Glance PTG planning etherpad [3] as soon as you can. 
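For reference, the interoperable image import flow is two calls: create a queued image, then ask Glance to fetch the bits itself. A minimal sketch of the request bodies follows (payload shapes are taken from the api-ref linked in [0]; verify field names there before relying on them):

```python
# Sketch of the two Glance v2 calls Heat would make to use the
# 'web-download' import method (Queens+). Glance fetches the URL
# itself, so Heat never has to download and re-upload the image data.
import json

def image_create_body(name, disk_format="qcow2", container_format="bare"):
    """Body for POST /v2/images -- creates an empty 'queued' image."""
    return {"name": name,
            "disk_format": disk_format,
            "container_format": container_format}

def web_download_body(url):
    """Body for POST /v2/images/{image_id}/import."""
    return {"method": {"name": "web-download", "uri": url}}

body = web_download_body("https://example.com/cirros.qcow2")
print(json.dumps(body))
```

The example URL is of course made up; whether the call succeeds in a given cloud still depends on the operator having web-download enabled and the URL passing any configured allow/deny lists.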
cheers, brian [0] https://developer.openstack.org/api-ref/image/v2/index.html#interoperable-image-import [1] https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html#configuring-the-web-download-method [3] https://etherpad.openstack.org/p/stein-ptg-glance-planning > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Sep 5 13:25:09 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 Sep 2018 13:25:09 +0000 Subject: [openstack-dev] [election] [tc] thank you In-Reply-To: References: Message-ID: <20180905132509.crg67lkklpmicncu@yuggoth.org> On 2018-09-04 22:11:41 -0400 (-0400), Emilien Macchi wrote: > After 2 years at the TC, I feel lucky enough to have been part of this > group where I hope that my modest contributions helped to make OpenStack a > bit better. [...] I think they have. I've always valued your input, and hope you continue providing it whether or not you're serving on the TC. Thanks, really! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From lbragstad at gmail.com Wed Sep 5 13:29:24 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 5 Sep 2018 08:29:24 -0500 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: References: Message-ID: Not yet. Keystone worked through a bunch of usability improvements with the unified limits API last release and created the oslo.limit library. 
We have a patch or two left to land in oslo.limit before projects can really start using unified limits [0]. We're hoping to get this working with at least one resource in another service (nova, cinder, etc...) in Stein. [0] https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee wrote: > Hello, > Do nova and cinder use keystone's unified limits API to handle > quota management? > If not, is there a plan to do this? > Thanks a lot. > > -- > 谦谦君子 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Wed Sep 5 13:30:40 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 05 Sep 2018 13:30:40 -0000 Subject: [openstack-dev] kolla 7.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for kolla for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/kolla/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball!
Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/kolla/log/?h=stable/rocky Release notes for kolla can be found at: https://docs.openstack.org/releasenotes/kolla/ From dms at danplanet.com Wed Sep 5 13:39:25 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 05 Sep 2018 06:39:25 -0700 Subject: [openstack-dev] [nova] [placement] extraction (technical) update References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: >> Yes, we should definitely trim the placement DB migrations to only >> things relevant to placement. And we can use this opportunity to get >> rid of cruft too and squash all of the placement migrations together >> to start at migration 1 for the placement repo. If anyone can think >> of a problem with doing that, please shout it out. I agree, FWIW. > Umm, nova-manage db sync creates entries in a sqlalchemy-migrate > versions table, something like that, to track per database what the > latest migration sync version has been. > > Based on that, and the fact I thought our DB extraction policy was to > mostly tell operators to copy the nova_api database and throw it > elsewhere in a placement database, then the migrate versions table is > going to be saying you're at 061 and you can't start new migrations > from 1 at that point, unless you wipe out that versions table after > you copy the API DB. They can do this, sure. However, either we'll need migrations to delete all the nova-api-related tables, or they will need to trim them manually. If we do the former, then everyone who ever installs placement from scratch will go through the early history of nova-api only to have that removed. Or we trim those off the front, but we have to keep the collapsing migrations until we compact again, etc. 
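For operators who take the copy-the-nova_api-database route discussed above, the table-level copy could be sketched as below. The table list is an assumption based on the Rocky-era nova_api schema and should be verified against the actual deployment; the pipeline is only printed for review, not executed.

```python
# Placement-related tables in the nova_api database (assumed list --
# check it against your release before running anything for real).
PLACEMENT_TABLES = [
    "resource_providers", "resource_provider_aggregates",
    "resource_provider_traits", "inventories", "allocations",
    "consumers", "projects", "users", "resource_classes", "traits",
    "placement_aggregates",
]

def build_dump_command(source_db="nova_api", target_db="placement"):
    # The sqlalchemy-migrate 'migrate_version' table is deliberately
    # left out, so the new database can restart its migration
    # numbering at 1.
    tables = " ".join(PLACEMENT_TABLES)
    return ("mysqldump --single-transaction %s %s | mysql %s"
            % (source_db, tables, target_db))

print(build_dump_command())
```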
The thing I'm more worried about is operators being surprised by this change (since it's happening suddenly in the middle of a release), noticing some split, and then realizing that if they just point the placement db connection at nova_api everything seems to work. That's going to go really badly when things start to diverge. > I could be wrong, but just copying the database, squashing/trimming > the migration scripts and resetting the version to 1, and assuming > things are going to be hunky dory doesn't sound like it will work to > me. Why not? I think the safest/cleanest thing to do here is renumber placement-related migrations from 1, and provide a script or procedure to dump just the placement-related tables from the nova_api database to the new one (not including the sqlalchemy-migrate versions table). --Dan From tenobreg at redhat.com Wed Sep 5 13:43:47 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Wed, 5 Sep 2018 10:43:47 -0300 Subject: [openstack-dev] [sahara] No meeting on Sep 6th and Sep 13th Message-ID: Hi folks, since we are approaching the PTG, where we will have enough time to talk over problems face to face, I'm canceling our team meetings this week and next week. See you in Denver. -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat is recognized among the best companies to work for in Brazil by Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From doug at doughellmann.com Wed Sep 5 13:46:36 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 05 Sep 2018 09:46:36 -0400 Subject: [openstack-dev] [election] [tc] thank you In-Reply-To: References: Message-ID: <1536155150-sup-1370@lrrr.local> Excerpts from Emilien Macchi's message of 2018-09-04 22:11:41 -0400: > After 2 years at the TC, I feel lucky enough to have been part of this > group where I hope that my modest contributions helped to make OpenStack a > bit better. > I've learnt so many things and worked with a talented group where it's not > easy every day, but we have made and will continue to progress in the > future. > I personally feel like some rotation needs to happen, therefore I won't run > the current election. > > I don't plan to leave or anything, I just wanted to say "merci" to the > community who gave me a chance to be part of this team. Thank you, Emilien. I've always appreciated having your perspective and assistance on the TC. Doug From mnaser at vexxhost.com Wed Sep 5 13:47:36 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 5 Sep 2018 09:47:36 -0400 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: Could placement not do what happened for a while when the nova_api database was created? I say this because I know that moving the database is a huge task for us, considering how big it can be in certain cases for us, and it means control plane outage too On Wed, Sep 5, 2018 at 9:39 AM Dan Smith wrote: > > >> Yes, we should definitely trim the placement DB migrations to only > >> things relevant to placement. And we can use this opportunity to get > >> rid of cruft too and squash all of the placement migrations together > >> to start at migration 1 for the placement repo. 
If anyone can think > >> of a problem with doing that, please shout it out. > > I agree, FWIW. > > > Umm, nova-manage db sync creates entries in a sqlalchemy-migrate > > versions table, something like that, to track per database what the > > latest migration sync version has been. > > > > Based on that, and the fact I thought our DB extraction policy was to > > mostly tell operators to copy the nova_api database and throw it > > elsewhere in a placement database, then the migrate versions table is > > going to be saying you're at 061 and you can't start new migrations > > from 1 at that point, unless you wipe out that versions table after > > you copy the API DB. > > They can do this, sure. However, either we'll need migrations to delete > all the nova-api-related tables, or they will need to trim them > manually. If we do the former, then everyone who ever installs placement > from scratch will go through the early history of nova-api only to have > that removed. Or we trim those off the front, but we have to keep the > collapsing migrations until we compact again, etc. > > The thing I'm more worried about is operators being surprised by this > change (since it's happening suddenly in the middle of a release), > noticing some split, and then realizing that if they just point the > placement db connection at nova_api everything seems to work. That's > going to go really bad when things start to diverge. > > > I could be wrong, but just copying the database, squashing/trimming > > the migration scripts and resetting the version to 1, and assuming > > things are going to be hunky dory doesn't sound like it will work to > > me. > > Why not? > > I think the safest/cleanest thing to do here is renumber placement-related > migrations from 1, and provide a script or procedure to dump just the > placement-related tables from the nova_api database to the new one (not > including the sqlalchemy-migrate versions table). 
> > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From cdent+os at anticdent.org Wed Sep 5 13:53:21 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 5 Sep 2018 14:53:21 +0100 (BST) Subject: [openstack-dev] [api] microversion-parse core updates Message-ID: After some discussion with other cores I've made some adjustments to the core team on microversion-parse [1] * added dtantsur (welcome!) * removed sdague In case you're not aware, microversion-parse is middleware and utilities for managing microversions in openstack service apis. [1] https://pypi.org/project/microversion_parse/ http://git.openstack.org/cgit/openstack/microversion-parse -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mriedemos at gmail.com Wed Sep 5 13:56:24 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 5 Sep 2018 08:56:24 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: <8cc82109-8b54-0314-b171-4d1b54a94cb6@gmail.com> On 9/5/2018 8:39 AM, Dan Smith wrote: > Why not? Because of the versions table as noted earlier. Up until this point no one had mentioned that but it would be an issue. 
> > I think the safest/cleanest thing to do here is renumber placement-related > migrations from 1, and provide a script or procedure to dump just the > placement-related tables from the nova_api database to the new one (not > including the sqlalchemy-migrate versions table). I'm OK with squashing/trimming/resetting the version to 1. What was not mentioned earlier in this thread was (1) an acknowledgement that we'd need to drop the versions table to reset the version in the new database and (2) any ideas about providing scripts to help with the DB migration. -- Thanks, Matt From sbauza at redhat.com Wed Sep 5 14:16:53 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 5 Sep 2018 16:16:53 +0200 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: <8cc82109-8b54-0314-b171-4d1b54a94cb6@gmail.com> References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> <8cc82109-8b54-0314-b171-4d1b54a94cb6@gmail.com> Message-ID: On Wed, Sep 5, 2018 at 3:56 PM, Matt Riedemann wrote: > On 9/5/2018 8:39 AM, Dan Smith wrote: > >> Why not? >> > > Because of the versions table as noted earlier. Up until this point no one > had mentioned that but it would be an issue. > > >> I think the safest/cleanest thing to do here is renumber placement-related >> migrations from 1, and provide a script or procedure to dump just the >> placement-related tables from the nova_api database to the new one (not >> including the sqlalchemy-migrate versions table). >> > > I'm OK with squashing/trimming/resetting the version to 1. What was not > mentioned earlier in this thread was (1) an acknowledgement that we'd need > to drop the versions table to reset the version in the new database and (2) > any ideas about providing scripts to help with the DB migration. > > I think it's safe too. 
Operators could just migrate the tables by using a read-only slave connection to a new DB and then using a script that would drop the non-needed tables. For people wanting to migrate tables, I think having the placement versions be different is not a problem given the tables are the same. -Sylvain -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Sep 5 14:43:53 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 5 Sep 2018 10:43:53 -0400 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core In-Reply-To: References: Message-ID: Hi Andy: I made a mental note of replying to this but I never got a chance to :( It was great working with you, you're always welcome back anytime! :) Thanks, Mohammed On Mon, Sep 3, 2018 at 3:32 AM Hugh Saunders wrote: > > Thanks for all your hard work on OSA Andy :) > > On Thu, 30 Aug 2018 at 18:41 Andy McCrae wrote: >> >> Now that Rocky is all but ready it seems like a good time! Since changing roles I've not been able to keep up enough focus on reviews and other obligations - so I think it's time to step aside as a core reviewer. >> >> I want to say thanks to everybody in the community, I'm really proud to see the work we've done and how the OSA team has grown. I've learned a tonne from all of you - it's definitely been a great experience.
>> >> Thanks, >> Andy >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mriedemos at gmail.com Wed Sep 5 14:56:59 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 5 Sep 2018 09:56:59 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: On 9/5/2018 8:47 AM, Mohammed Naser wrote: > Could placement not do what happened for a while when the nova_api > database was created? Can you be more specific? I'm having a brain fart here and not remembering what you are referring to with respect to the nova_api DB. 
> > I say this because I know that moving the database is a huge task for > > us, considering how big it can be in certain cases for us, and it > > means control plane outage too I'm pretty sure you were in the room in YVR when we talked about how operators were going to do the database migration and were mostly OK with what was discussed, which was that a lot will just copy and take the downtime (I think CERN said around 10 minutes for them, but they aren't a public cloud either), but others might do something more sophisticated and nova shouldn't try to pick the best fit for all. I'm definitely interested in what you do plan to do for the database migration to minimize downtime. +openstack-operators ML since this is an operators discussion now. -- Thanks, Matt From zigo at debian.org Wed Sep 5 15:01:49 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 5 Sep 2018 17:01:49 +0200 Subject: [openstack-dev] better name for placement In-Reply-To: <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> Message-ID: <21cc68da-552b-2f07-cc57-fcf6ee6ac0fc@debian.org> On 09/04/2018 06:25 PM, Jay Pipes wrote: > On 09/04/2018 12:17 PM, Doug Hellmann wrote: >> Excerpts from Jay Pipes's message of 2018-09-04 12:08:41 -0400: >>> On 09/04/2018 11:44 AM, Doug Hellmann wrote: >>>> Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100: >>>>> On Tue, 4 Sep 2018, Jay Pipes wrote: >>>>> >>>>>> Is there a reason we couldn't have openstack-placement be the >>>>>> package name? >>>>> >>>>> I would hope we'd be able to do that, and probably should do that. >>>>> 'openstack-placement' seems a fine pypi package name for a thing >>>>> from which you do 'import placement' to do some openstack stuff, >>>>> yeah?
>>>> That's still a pretty generic name for the top-level import, but I >>>> think >>>> the only real risk is that the placement service couldn't be installed >>>> at the same time as another package owned by someone else that used >>>> that >>>> top-level name. I'm not sure how much of a risk that really is. >>> >>> You mean if there was another Python package that used the package name >>> "placement"? >>> >>> The alternative would be to make the top-level package something like >>> os_placement instead? > > Either one works for me. Though I'm pretty sure that it isn't necessary. > The reason it isn't necessary is because the stuff in the top-level > placement package isn't meant to be imported by anything at all. In a distro, no two packages can hold the same file. That's forbidden. This has nothing to do with whether someone has to "import placement" or not. Just saying this, but *not* that we should rename (I didn't spot any conflict yet and I understand the pain it would induce). This command returns nothing: apt-file search placement | grep python3/dist-packages/placement Cheers, Thomas Goirand (zigo) From anteaya at anteaya.info Wed Sep 5 15:03:03 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Wed, 5 Sep 2018 11:03:03 -0400 Subject: [openstack-dev] [election] [tc] thank you In-Reply-To: <4671e7de-6155-a61d-1625-5487c7250e32@openstack.org> References: <4671e7de-6155-a61d-1625-5487c7250e32@openstack.org> Message-ID: On 2018-09-05 04:01 AM, Thierry Carrez wrote: > Emilien Macchi wrote: >> I personally feel like some rotation needs to happen A very honourable sentiment, Emilien. I'm so grateful to have spent time working with your very generous spirit.
To more such work in the future, Anita From prometheanfire at gentoo.org Wed Sep 5 15:03:09 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 5 Sep 2018 10:03:09 -0500 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked In-Reply-To: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org> References: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org> Message-ID: <20180905150309.cxstnk6i2sms6pj4@gentoo.org> On 18-08-31 19:52:09, Matthew Thode wrote: > The requirements project has a co-installability test for the various > projects, networking-odl being included. > > Because of the way the dependency on ceilometer is done it is blocking > all reviews and updates to the requirements project. > > http://logs.openstack.org/96/594496/2/check/requirements-integration/8378cd8/job-output.txt.gz#_2018-08-31_22_54_49_357505 > > If networking-odl is not meant to be used as a library I'd recommend > its removal from networking-bgpvpn (its test-requirements.txt file). > Once that is done networking-odl can be removed from global-requirements > and we won't be blocked anymore. > > As a side note, fungi noticed that when you branched you are still > installing ceilometer from master. Also, the ceilometer team > doesn't wish it to be used as a library either (like networking-odl > doesn't wish to be used as a library). > The requirements team has gone ahead and made an awful hack to get gate unwedged.
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mnaser at vexxhost.com Wed Sep 5 15:03:03 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 5 Sep 2018 11:03:03 -0400 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: On Wed, Sep 5, 2018 at 10:57 AM Matt Riedemann wrote: > > On 9/5/2018 8:47 AM, Mohammed Naser wrote: > > Could placement not do what happened for a while when the nova_api > > database was created? > > Can you be more specific? I'm having a brain fart here and not > remembering what you are referring to with respect to the nova_api DB. I think there was a period in time where the nova_api database was created where entires would try to get pulled out from the original nova database and then checking nova_api if it doesn't exist afterwards (or vice versa). One of the cases that this was done to deal with was for things like instance types or flavours. I don't know the exact details but I know that older instance types exist in the nova db and the newer ones are sitting in nova_api. Something along those lines? > > > > I say this because I know that moving the database is a huge task for > > us, considering how big it can be in certain cases for us, and it > > means control plane outage too > > I'm pretty sure you were in the room in YVR when we talked about how > operators were going to do the database migration and were mostly OK > with what was discussed, which was a lot will just copy and take the > downtime (I think CERN said around 10 minutes for them, but they aren't > a public cloud either), but others might do something more sophisticated > and nova shouldn't try to pick the best fit for all. 
If we're provided the list of tables used by placement, we could considerably make the downtime smaller because we don't have to pull in the other huge tables like instances/build requests/etc What happens if things like server deletes happen while the placement service is down? > I'm definitely interested in what you do plan to do for the database > migration to minimize downtime. At this point, I'm thinking turn off placement, setup the new one, do the migration of the placement-specific tables (this can be a straightforward documented task OR it would be awesome if it was a placement command (something along the lines of `placement-manage db import_from_nova`) which would import all the right things The idea of having a command would be *extremely* useful for deployment tools in automating the process and it also allows the placement team to selectively decide what they want to onboard? Just throwing ideas here. > +openstack-operators ML since this is an operators discussion now. > > -- > > Thanks, > > Matt -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From rico.lin.guanyu at gmail.com Wed Sep 5 15:12:27 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 5 Sep 2018 23:12:27 +0800 Subject: [openstack-dev] [heat][glance] Heat image resource support issue In-Reply-To: References: Message-ID: On Wed, Sep 5, 2018 at 8:47 PM Brian Rosmaita wrote: Since Queens, Glance has had a 'web-download' import method that takes a > URL [0]. It's enabled by default, but operators do have the ability to > turn it off. (There's an API call to see what methods are enabled in a > particular cloud.) Operators also have the ability to restrict what URLs > are acceptable [1], but that's probably a good thing. 
> > In short, Glance does have the ability to do what you need since Queens, > but there's no guarantee that it will be available in all clouds and for > all URLs. If you foresee that as a problem, it would be a good idea to get > together with the Glance team at the PTG to discuss this issue. Please add > it as a topic to the Glance PTG planning etherpad [3] as soon as you can. > Cool! Thank Brian. Sounds like something we can use, just one small question in my mind. In order to use `web-download` in image resource, we need to create an empty image than use import to upload that imge. I have try that scenrio by myself now (I'm not really diving into detail yet) by: 1. create an empty image(like `openstack image create --container-format bare --disk-format qcow2 img_name`) 2. and than import image (like `glance image-import --import-method web-download --uri https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img `) But that image stuck in queued after first step. dose this scenario supported by glance? Or where did I do wrong? > > [0] > https://developer.openstack.org/api-ref/image/v2/index.html#interoperable-image-import > [1] > https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html#configuring-the-web-download-method > [3] https://etherpad.openstack.org/p/stein-ptg-glance-planning > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Wed Sep 5 15:19:23 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 5 Sep 2018 10:19:23 -0500 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: On 9/5/2018 10:03 AM, Mohammed Naser wrote: > On Wed, Sep 5, 2018 at 10:57 AM Matt Riedemann wrote: >> On 9/5/2018 8:47 AM, Mohammed Naser wrote: >>> Could placement not do what happened for a while when the nova_api >>> database was created? >> Can you be more specific? I'm having a brain fart here and not >> remembering what you are referring to with respect to the nova_api DB. > I think there was a period in time where the nova_api database was created > where entires would try to get pulled out from the original nova database and > then checking nova_api if it doesn't exist afterwards (or vice versa). One > of the cases that this was done to deal with was for things like instance types > or flavours. > > I don't know the exact details but I know that older instance types exist in > the nova db and the newer ones are sitting in nova_api. Something along > those lines? Yeah that more about supporting online data migrations *within* nova where new records were created in the API DB and old records would be looked up in both the API DB and then if not found there, in the cell (traditional nova DB). But you'd also be running the "nova-manage db online_data_migrations" CLI to force the migration of the records from the cell DB to the API DB. With Placement split out of nova, we can't really do that. You could point placement at the nova_api DB so it can pull existing records, but it would continue to create new records in the nova_api DB rather than the placement DB and at some point you have to make that data migration. 
Maybe you were thinking something like have temporary fallback code in placement such that if a record isn't found in the placement database, it queries a configured nova_api database? That'd be a ton of work at this point, and if it was something we were going to do, we should have agreed on that in YVR several months ago, definitely pre-extraction. > >>> I say this because I know that moving the database is a huge task for >>> us, considering how big it can be in certain cases for us, and it >>> means control plane outage too >> I'm pretty sure you were in the room in YVR when we talked about how >> operators were going to do the database migration and were mostly OK >> with what was discussed, which was a lot will just copy and take the >> downtime (I think CERN said around 10 minutes for them, but they aren't >> a public cloud either), but others might do something more sophisticated >> and nova shouldn't try to pick the best fit for all. > If we're provided the list of tables used by placement, we could considerably > make the downtime smaller because we don't have to pull in the other huge > tables like instances/build requests/etc There are no instances records in the API DB, maybe you mean instance_mappings? But yes I get the point. > > What happens if things like server deletes happen while the placement service > is down? 
The DELETE /allocations/{consumer_id} requests from nova to placement will fail with some keystoneauth1 exception, but because of our old friend @safe_connect we likely won't fail the server delete because we squash the exception from KSA: https://github.com/openstack/nova/blob/0f102089dd0b27c7d35f0cbba87332414032c0a4/nova/scheduler/client/report.py#L2069 However, you'd still have allocations in placement against resource providers (compute nodes) for instances that no longer exist, which means you're available capacity for scheduling new requests is diminished until those bogus allocations are purged from placement, which will take some scripting. In other words, not good things. > >> I'm definitely interested in what you do plan to do for the database >> migration to minimize downtime. > At this point, I'm thinking turn off placement, setup the new one, do > the migration > of the placement-specific tables (this can be a straightforward documented task > OR it would be awesome if it was a placement command (something along > the lines of `placement-manage db import_from_nova`) which would import all > the right things You wouldn't also stop nova-api while doing this? Otherwise you're going to get into the data/resource tracking mess described above which will require some post-migration cleanup scripting. > > The idea of having a command would be*extremely* useful for deployment tools > in automating the process and it also allows the placement team to selectively > decide what they want to onboard? > > Just throwing ideas here. 
> -- Thanks, Matt From fungi at yuggoth.org Wed Sep 5 15:48:56 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 Sep 2018 15:48:56 +0000 Subject: [openstack-dev] better name for placement In-Reply-To: <21cc68da-552b-2f07-cc57-fcf6ee6ac0fc@debian.org> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> <21cc68da-552b-2f07-cc57-fcf6ee6ac0fc@debian.org> Message-ID: <20180905153928.cm4qi37ua33liyf7@yuggoth.org> On 2018-09-05 17:01:49 +0200 (+0200), Thomas Goirand wrote: [...] > In a distro, no 2 package can hold the same file. That's > forbidden. This has nothing to do if someone has to "import > placemement" or not. > > Just saying this, but *not* that we should rename (I didn't spot > any conflict yet and I understand the pain it would induce). This > command returns nothing: > > apt-file search placement | grep python3/dist-packages/placement Well, also since the Placement maintainers have expressed that they aren't interested in making Python API contracts for it to be usable as an importable library, there's probably no need to install its modules into the global Python search path anyway. They could just go into a private module path on the filesystem instead as long as the placement service/entrypoint wrapper knows where to find them, right? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From thomas.morin at orange.com Wed Sep 5 15:50:59 2018 From: thomas.morin at orange.com (thomas.morin at orange.com) Date: Wed, 5 Sep 2018 17:50:59 +0200 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked In-Reply-To: <20180905150309.cxstnk6i2sms6pj4@gentoo.org> References: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org> <20180905150309.cxstnk6i2sms6pj4@gentoo.org> Message-ID: <18148_1536162660_5B8FFB64_18148_99_1_99b48efb-f187-e881-bd0f-9412d20e23e5@orange.com> Matthew, networking-odl has now been removed from the requirements of networking-bgpvpn [1], on master, so networking-odl could be removed from requirements. This is not the case on stable branches, though. -Thomas [1] https://review.openstack.org/#/c/599422/ On 05/09/2018 17:03, Matthew Thode wrote: > On 18-08-31 19:52:09, Matthew Thode wrote: >> The requirements project has a co-installability test for the various >> projects, networking-odl being included. >> >> Because of the way the dependancy on ceilometer is done it is blocking >> all reviews and updates to the requirements project. >> >> http://logs.openstack.org/96/594496/2/check/requirements-integration/8378cd8/job-output.txt.gz#_2018-08-31_22_54_49_357505 >> >> If networking-odl is not meant to be used as a library I'd recommend >> it's removal from networking-bgpvpn (it's test-requirements.txt file). >> Once that is done networking-odl can be removed from global-requirements >> and we won't be blocked anymore. >> >> As a side note, fungi noticed that when you branched you are still >> installing ceilometer from master. Also, the ceilometer team >> doesnt wish it to be used as a library either (like networking-odl >> doesn't wish to be used as a library). >> > The requirements team has gone ahead and made a aweful hack to get gate > unwedged.
The commit message is a very good summary of our reasoning > why it has to be this way for now. My comment explains our plan going > forward (there will be a revert prepared as soon as this merges for > instance). > > step 1. merge this > step 2. look into and possibly fix our tooling (why was the gitref addition not rejected by gate) > step 3. fix networking-odl (release ceilometer) > step 4. unmerge this > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. 
From jaypipes at gmail.com Wed Sep 5 15:52:08 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 5 Sep 2018 11:52:08 -0400 Subject: [openstack-dev] better name for placement In-Reply-To: <20180905153928.cm4qi37ua33liyf7@yuggoth.org> References: <2fcd9d03-6d26-8b48-d55f-7b86e0a0d287@debian.org> <20180904133741.eetizhx4rksarmg7@yuggoth.org> <1536075775-sup-7652@lrrr.local> <1536077826-sup-9892@lrrr.local> <24208112-5803-43f9-72e9-77a31ca7374f@gmail.com> <21cc68da-552b-2f07-cc57-fcf6ee6ac0fc@debian.org> <20180905153928.cm4qi37ua33liyf7@yuggoth.org> Message-ID: On 09/05/2018 11:48 AM, Jeremy Stanley wrote: > On 2018-09-05 17:01:49 +0200 (+0200), Thomas Goirand wrote: > [...] >> In a distro, no 2 package can hold the same file. That's >> forbidden. This has nothing to do if someone has to "import >> placemement" or not. >> >> Just saying this, but *not* that we should rename (I didn't spot >> any conflict yet and I understand the pain it would induce). This >> command returns nothing: >> >> apt-file search placement | grep python3/dist-packages/placement > > Well, also since the Placement maintainers have expressed that they > aren't interested in making Python API contracts for it to be usable > as an importable library, there's probably no need to install its > modules into the global Python search path anyway. They could just > go into a private module path on the filesystem instead as long as > the placement service/entrypoint wrapper knows where to find them, > right? Yep. 
-jay From prometheanfire at gentoo.org Wed Sep 5 15:58:49 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 5 Sep 2018 10:58:49 -0500 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked In-Reply-To: <18148_1536162660_5B8FFB64_18148_99_1_99b48efb-f187-e881-bd0f-9412d20e23e5@orange.com> References: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org> <20180905150309.cxstnk6i2sms6pj4@gentoo.org> <18148_1536162660_5B8FFB64_18148_99_1_99b48efb-f187-e881-bd0f-9412d20e23e5@orange.com> Message-ID: <20180905155849.sxnr3bgvmp6uvgso@gentoo.org> On 18-09-05 17:50:59, thomas.morin at orange.com wrote: > Mathew, > > networking-odl has now been removed from the requirements of > networking-bgpvpn [1], on master, so networking-odl could be removed from > requirements. > > This is not the case on stable branches, though. > > -Thomas > > [1] https://review.openstack.org/#/c/599422/ > > On 05/09/2018 17:03, Matthew Thode wrote: > > On 18-08-31 19:52:09, Matthew Thode wrote: > > > The requirements project has a co-installability test for the various > > > projects, networking-odl being included. > > > > > > Because of the way the dependancy on ceilometer is done it is blocking > > > all reviews and updates to the requirements project. > > > > > > http://logs.openstack.org/96/594496/2/check/requirements-integration/8378cd8/job-output.txt.gz#_2018-08-31_22_54_49_357505 > > > > > > If networking-odl is not meant to be used as a library I'd recommend > > > it's removal from networking-bgpvpn (it's test-requirements.txt file). > > > Once that is done networking-odl can be removed from global-requirements > > > and we won't be blocked anymore. > > > > > > As a side note, fungi noticed that when you branched you are still > > > installing ceilometer from master. Also, the ceilometer team > > > doesnt wish it to be used as a library either (like networking-odl > > > doesn't wish to be used as a library). 
> > > > > The requirements team has gone ahead and made a aweful hack to get gate > > unwedged. The commit message is a very good summary of our reasoning > > why it has to be this way for now. My comment explains our plan going > > forward (there will be a revert prepared as soon as this merges for > > instance). > > > > step 1. merge this > > step 2. look into and possibly fix our tooling (why was the gitref addition not rejected by gate) > > step 3. fix networking-odl (release ceilometer) > > step 4. unmerge this > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _________________________________________________________________________________________________________________________ > > Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc > pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler > a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, > Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. > > This message and its attachments may contain confidential or privileged information that may be protected by law; > they should not be distributed, used or copied without authorisation. > If you have received this email in error, please notify the sender and delete this message and its attachments. > As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. > Thank you. > Yep, we discussed doing that (and it's still an option). We decided to do something a bit more verbose though and have a plan. 
Just need to get ceilometer to release to pypi... -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From amy at demarco.com Wed Sep 5 16:02:37 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 5 Sep 2018 09:02:37 -0700 Subject: [openstack-dev] [election] [tc] thank you In-Reply-To: References: <4671e7de-6155-a61d-1625-5487c7250e32@openstack.org> Message-ID: Emilien, Thank you so much for all you've done as part of the TC and continue to do within the community! Amy (spotz) On Wed, Sep 5, 2018 at 8:03 AM, Anita Kuno wrote: > On 2018-09-05 04:01 AM, Thierry Carrez wrote: > >> Emilien Macchi wrote: >> > > I personally feel like some rotation needs to happen >>> >> > A very honourable sentiment, Emilien. I'm so grateful to have spent time > working with your very generous spirit. > > To more such work in the future, Anita > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Sep 5 16:10:41 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 05 Sep 2018 12:10:41 -0400 Subject: [openstack-dev] [all]-ish : Updates required for readthedocs publishers In-Reply-To: <7306a7a3-e42f-22d0-c229-0ce74e1cb2e4@redhat.com> References: <7306a7a3-e42f-22d0-c229-0ce74e1cb2e4@redhat.com> Message-ID: <1536163649-sup-7521@lrrr.local> Excerpts from Ian Wienand's message of 2018-09-05 18:53:10 +1000: > Hello, > > If you're interested in the projects mentioned below, you may have > noticed a new, failing, non-voting job > "your-readthedocs-job-requires-attention".
Spoiler alert: your > readthedocs job requires attention. It's easy to miss because > publishing happens in the post pipeline and people don't often look > at the results of these jobs. > > Please see the prior email on this > > http://lists.openstack.org/pipermail/openstack-dev/2018-August/132836.html Those instructions and the ones linked at https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs say to "generate a web hook URL". RTD offers me 4 types of webhooks (github, bitbucket, gitlab, generic). Which type do we need for our CI? "generic"? What is the "ID" of the webhook? The number at the end of the URL, or the token associated with it? Doug > > for what to do (if you read the failing job logs, it also points you > to this). > > I (or #openstack-infra) can help, but only once the openstackci user > is given permissions to the RTD project by its current owner. > > Thanks, > > -i > > The following projects have this job now: > > openstack-infra/gear > openstack/airship-armada > openstack/almanach > openstack/ansible-role-bindep > openstack/ansible-role-cloud-launcher > openstack/ansible-role-diskimage-builder > openstack/ansible-role-cloud-fedmsg > openstack/ansible-role-cloud-gearman > openstack/ansible-role-jenkins-job-builder > openstack/ansible-role-logrotate > openstack/ansible-role-ngix > openstack/ansible-role-nodepool > openstack/ansible-role-openstacksdk > openstack/ansible-role-shade > openstack/ansible-role-ssh > openstack/ansible-role-sudoers > openstack/ansible-role-virtualenv > openstack/ansible-role-zookeeper > openstack/ansible-role-zuul > openstack/ara > openstack/bareon > openstack/bareon-allocator > openstack/bareon-api > openstack/bareon-ironic > openstack/browbeat > openstack/downpour > openstack/fuel-ccp > openstack/fuel-ccp-installer > openstack/fuel-noop-fixtures > openstack/ironic-staging-drivers > openstack/k8s-docker-suite-app-murano > openstack/kloudbuster > 
openstack/nerd-reviewer > openstack/networking-dpm > openstack/nova-dpm > openstack/ooi > openstack/os-faults > openstack/packetary > openstack/packetary-specs > openstack/performa > openstack/poppy > openstack/python-almanachclient > openstack/python-jenkins > openstack/rally > openstack/solar > openstack/sqlalchemy-migrate > openstack/stackalytics > openstack/surveil > openstack/swauth > openstack/turbo-hipster > openstack/virtualpdu > openstack/vmtp > openstack/windmill > openstack/yaql > From dtantsur at redhat.com Wed Sep 5 16:17:46 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 5 Sep 2018 18:17:46 +0200 Subject: [openstack-dev] [all]-ish : Updates required for readthedocs publishers In-Reply-To: <1536163649-sup-7521@lrrr.local> References: <7306a7a3-e42f-22d0-c229-0ce74e1cb2e4@redhat.com> <1536163649-sup-7521@lrrr.local> Message-ID: <5313a6e2-91c2-937a-e84b-45e89f726810@redhat.com> On 09/05/2018 06:10 PM, Doug Hellmann wrote: > Excerpts from Ian Wienand's message of 2018-09-05 18:53:10 +1000: >> Hello, >> >> If you're interested in the projects mentioned below, you may have >> noticed a new, failing, non-voting job >> "your-readthedocs-job-requires-attention". Spoiler alert: your >> readthedocs job requires attention. It's easy to miss because >> publishing happens in the post pipeline and people don't often look >> at the results of these jobs. >> >> Please see the prior email on this >> >> http://lists.openstack.org/pipermail/openstack-dev/2018-August/132836.html > > Those instructions and the ones linked at > https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs > say to "generate a web hook URL". RTD offers me 4 types of webhooks > (github, bitbucket, gitlab, generic). Which type do we need for our > CI? "generic"? The generic one. > > What is the "ID" of the webhook? The number at the end of the URL, or > the token associated with it? 
The number, like here: https://github.com/openstack/metalsmith/blob/master/.zuul.yaml#L157 > > Doug > >> >> for what to do (if you read the failing job logs, it also points you >> to this). >> >> I (or #openstack-infra) can help, but only once the openstackci user >> is given permissions to the RTD project by its current owner. >> >> Thanks, >> >> -i >> >> The following projects have this job now: >> >> openstack-infra/gear >> openstack/airship-armada >> openstack/almanach >> openstack/ansible-role-bindep >> openstack/ansible-role-cloud-launcher >> openstack/ansible-role-diskimage-builder >> openstack/ansible-role-cloud-fedmsg >> openstack/ansible-role-cloud-gearman >> openstack/ansible-role-jenkins-job-builder >> openstack/ansible-role-logrotate >> openstack/ansible-role-ngix >> openstack/ansible-role-nodepool >> openstack/ansible-role-openstacksdk >> openstack/ansible-role-shade >> openstack/ansible-role-ssh >> openstack/ansible-role-sudoers >> openstack/ansible-role-virtualenv >> openstack/ansible-role-zookeeper >> openstack/ansible-role-zuul >> openstack/ara >> openstack/bareon >> openstack/bareon-allocator >> openstack/bareon-api >> openstack/bareon-ironic >> openstack/browbeat >> openstack/downpour >> openstack/fuel-ccp >> openstack/fuel-ccp-installer >> openstack/fuel-noop-fixtures >> openstack/ironic-staging-drivers >> openstack/k8s-docker-suite-app-murano >> openstack/kloudbuster >> openstack/nerd-reviewer >> openstack/networking-dpm >> openstack/nova-dpm >> openstack/ooi >> openstack/os-faults >> openstack/packetary >> openstack/packetary-specs >> openstack/performa >> openstack/poppy >> openstack/python-almanachclient >> openstack/python-jenkins >> openstack/rally >> openstack/solar >> openstack/sqlalchemy-migrate >> openstack/stackalytics >> openstack/surveil >> openstack/swauth >> openstack/turbo-hipster >> openstack/virtualpdu >> openstack/vmtp >> openstack/windmill >> openstack/yaql >> > > 
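Taken together, the answers above sketch out a project stanza roughly like the following in a repo's .zuul.yaml, modeled on the metalsmith example Dmitry links. The ID value here is a placeholder; it would be replaced with the number from your own RTD project's generic webhook URL.

```yaml
- project:
    templates:
      - docs-on-readthedocs
    vars:
      # Numeric ID from the end of the generic webhook URL that
      # readthedocs generates -- not the secret token.
      rtd_webhook_id: '12345'
```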
__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dms at danplanet.com Wed Sep 5 16:41:31 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 05 Sep 2018 09:41:31 -0700 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: (Mohammed Naser's message of "Wed, 5 Sep 2018 11:03:03 -0400") References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: > I think there was a period in time where the nova_api database was created > where entires would try to get pulled out from the original nova database and > then checking nova_api if it doesn't exist afterwards (or vice versa). One > of the cases that this was done to deal with was for things like instance types > or flavours. > > I don't know the exact details but I know that older instance types exist in > the nova db and the newer ones are sitting in nova_api. Something along > those lines? Yep, we've moved entire databases before in nova with minimal disruption to the users. Not just flavors, but several pieces of data came out of the "main" database and into the api database transparently. It's doable, but with placement being split to a separate project/repo/whatever, there's not really any option for being graceful about it in this case. 
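The placement-table migration being discussed in this thread could be sketched as a dry-run shell script like the one below. The table list is an assumption about which nova_api tables belong to placement and must be verified against the actual schema; the dump/restore commands are only echoed, not executed, so they can be reviewed and adapted before stopping any services.

```shell
# Dry-run sketch: build (but do not execute) the dump/restore commands
# for moving the placement tables out of nova_api into their own DB.
# The table list is an assumption -- verify it against your schema.
TABLES="resource_providers resource_provider_aggregates \
resource_provider_traits inventories allocations resource_classes \
traits consumers projects users placement_aggregates"

DUMP="mysqldump --single-transaction nova_api $TABLES"
RESTORE="mysql placement"

# Review these before stopping services and running them for real:
echo "$DUMP > placement.sql"
echo "$RESTORE < placement.sql"
```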
> At this point, I'm thinking turn off placement, setup the new one, do > the migration > of the placement-specific tables (this can be a straightforward documented task > OR it would be awesome if it was a placement command (something along > the lines of `placement-manage db import_from_nova`) which would import all > the right things > > The idea of having a command would be *extremely* useful for deployment tools > in automating the process and it also allows the placement team to selectively > decide what they want to onboard? Well, it's pretty cut-and-dried as all the tables in nova-api are either for nova or placement, so there's not much confusion about what belongs. I'm not sure that doing this import in python is really the most efficient way. I agree a placement-manage command would be ideal from an "easy button" point of view, but I think a couple lines of bash that call mysqldump are likely to vastly outperform us doing it natively in python. We could script exec()s of those commands from python, but.. I think I'd rather just see that as a shell script that people can easily alter/test on their own. Just curious, but in your case would the service catalog entry change at all? If you stand up the new placement in the exact same spot, it shouldn't, but I imagine some people will have the catalog entry change slightly (even if just because of a VIP or port change). Am I remembering correctly that the catalog can get cached in various places such that much of nova would need a restart to notice? --Dan From whayutin at redhat.com Wed Sep 5 16:58:11 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 5 Sep 2018 10:58:11 -0600 Subject: [openstack-dev] [tripleo] quickstart for humans In-Reply-To: References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> Message-ID: On Fri, Aug 31, 2018 at 7:12 AM Steven Hardy wrote: > On Thu, Aug 30, 2018 at 3:28 PM, Honza Pokorny wrote: > > Hello! 
> > > > Over the last few months, it seems that tripleo-quickstart has evolved > > into a CI tool. It's primarily used by computers, and not humans. > > tripleo-quickstart is a helpful set of ansible playbooks, and a > > collection of feature sets. However, it's become less useful for > > setting up development environments by humans. For example, devmode.sh > > was recently deprecated without a user-friendly replacement. Moreover, during some > > informal irc conversations in #oooq, some developers even mentioned the > > plan to merge tripleo-quickstart and tripleo-ci. > > I was recently directed to the reproducer-quickstart.sh script that's > written in the logs directory for all oooq CI jobs - does that help as > a replacement for the previous devmode interface? > > Not that familiar with it myself but it seems to target many of the > use-cases you mention e.g uniform reproducer for issues, potentially > quicker way to replicate CI results? > > Steve > > Thanks Honza and Steve for sharing. Steve is correctly pointing out that reproducer scripts [1] are the upgraded version of what was known as devmode. There are two main goals we are trying to achieve as a CI team with regard to reproducing CI. A. Ensure that a developer can reproduce what is executed upstream step by step as closely as is possible to deliver a 1:1 matching result B. Ensure the reliability of the local run is as close to the reliability of the upstream check job as possible. The older devmode scripts did a rather poor job at both A and B, whereas the reproducer script will actually execute the upstream CI workflow once an environment is provisioned. The results should be identical as long as there are no yum or other network-related issues. CI is a very opinionated realm of work, a point that Jirka makes quite well. We have to focus on goals that are clearly defined. The long-term goal is to make TripleO very easy to use and deploy, not just make tripleo-quickstart easy to use.
The TripleO CI team is happy to help Honza or Jason stand up a tripleo job against the tripleo-ui repo. At which point you should have something testing your changes and the scripts and tools to reproduce that job. I never like to see an upstream repo w/o any real CI running against it. Thanks [1] https://docs.openstack.org/tripleo-docs/latest/contributor/reproduce-ci.html > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From knikolla at bu.edu Wed Sep 5 18:20:46 2018 From: knikolla at bu.edu (Kristi Nikolla) Date: Wed, 5 Sep 2018 14:20:46 -0400 Subject: [openstack-dev] [election] [tc] TC Candidacy Message-ID: Hi all, I’m stepping out of the comfort zone and submitting my candidacy for a seat on the OpenStack Technical Committee. I’m a software architect at the Mass Open Cloud[0], a collaboration between the five major universities of the Boston area to create and operate a public cloud based on the Open Cloud eXchange model[1]. I’ve been involved with OpenStack for the past three years as a user, operator and developer. My main area of focus is in identity and federation. I’m a core reviewer for the Keystone project and the lead for MixMatch[2][3][4], which aims to enable resource federation among multiple OpenStack deployments. I believe my affiliation with academia and research will bring a different voice to the technical committee from a mostly underrepresented group. 
Furthermore, my experience with federation and operating a cloud with a diverse set of offerings not limited to OpenStack, but including other important pieces of a cloud provider’s toolbox will prove really valuable, especially with the vision dilemmas that OpenStack is facing today. I’m really excited to have the opportunity to take part in the discussion with regards to the technical vision for OpenStack. Regardless of election outcome, this is the first step towards a larger involvement from me in the important discussions (no more shying away from the important mailing list threads.) I have a lot yet to learn, and consider it a big privilege to be surrounded by so many kind people who have mentored me and continue to mentor me while I walk and stumble. I strongly believe in servant leadership, and I will devote myself in helping the community and mentoring the next wave of OpenStack contributors. I have found OpenStack to be one of the most welcoming online communities, and am very proud to be a part of this big family and for a chance to give back. Thank you for your time and attention, Kristi Nikolla (knikolla) [0]. https://massopen.cloud [1]. http://www.cs.bu.edu/fac/best/res/papers/ic14.pdf [2]. https://mixmatch.readthedocs.io [3]. https://github.com/openstack/mixmatch [4]. https://youtu.be/O0euqskJJ_8 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From melwittt at gmail.com Wed Sep 5 19:23:16 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 5 Sep 2018 12:23:16 -0700 Subject: [openstack-dev] [nova] No weekly meeting on Thursday September 13 In-Reply-To: <8b3eac26-9003-a6cb-bcc8-920ef903efcc@gmail.com> References: <8b3eac26-9003-a6cb-bcc8-920ef903efcc@gmail.com> Message-ID: On Tue, 4 Sep 2018 20:36:45 -0500, Matt Riedemann wrote: > On 9/4/2018 4:13 PM, melanie witt wrote: >> The next meeting will be on Thursday September 20 at 1400 UTC [1]. > I'm assuming we're going to have a meeting*this* week though, right? Yes, sorry if I worded that in a confusing way. We have a meeting tomorrow September 6 at 1400 UTC, we will _not_ meet during PTG week on Thursday September 13, and then we resume meeting on Thursday September 20 at 1400 UTC. -melanie From samuel at cassi.ba Wed Sep 5 19:49:25 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Wed, 5 Sep 2018 12:49:25 -0700 Subject: [openstack-dev] [election] [tc] TC candidacy Message-ID: Hello everybody, I am announcing my candidacy to be a member of the OpenStack Technical Committee (TC). I have been involved in open source since I was a brash youth on the Internet in the late 1990s, which amounts to over half my life at this point. I am a self-taught individual, cutting my teeth on BSDs of the period. I operated in that area for a number of years, becoming a 'shadow' maintainer under various pseudonyms. As time progressed, I became comfortable attributing my work to my personal identity. o/ My direct involvement with OpenStack began during the Folsom release, as an operator and deployer. I focused my efforts on automation, eventually falling in with a crowd that likes puns and cooking references. In my professional life, I have served as developer, operator, user, and architect, which extends back to the birthplace of OpenStack. 
I am a founding member of Chef OpenStack[0], where I have dutifully served as PTL for five releases. My community involvement also extends outside the OpenStack ecosystem, where I serve as a member of Sous Chefs[1], a group dedicated to the long-term care of critical Chef community resources. Though my hands-on experience goes back several releases, I still view things from the outside-looking-in perspective. Having the outsider lens is crucial in the long-term for any consensus-driven group, regardless of that consensus. Regardless of the election outcome, this is me taking steps to having a larger involvement in the overall conversations that drive so much of our daily lives. At the end of the day, we're all just groups of people trying to do our jobs. I view this as an opportunity to give back to a community that has given me so much. Thank you for your attention and consideration, Samuel Cassiba (scas) [0] https://docs.openstack.org/openstack-chef/latest/ [1] https://sous-chefs.org/ From Kent.Gordon at VerizonWireless.com Wed Sep 5 19:54:32 2018 From: Kent.Gordon at VerizonWireless.com (Gordon, Kent) Date: Wed, 5 Sep 2018 19:54:32 +0000 Subject: [openstack-dev] [E] [tripleo] quickstart for humans In-Reply-To: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> Message-ID: <4c102593131d458c811f3668e695782d@scwexch19apd.uswin.ad.vzwcorp.com> > -----Original Message----- > From: Honza Pokorny [mailto:honza at redhat.com] > Sent: Thursday, August 30, 2018 9:28 AM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: [E] [openstack-dev] [tripleo] quickstart for humans > > Hello! > > Over the last few months, it seems that tripleo-quickstart has evolved into a > CI tool. It's primarily used by computers, and not humans. > tripleo-quickstart is a helpful set of ansible playbooks, and a collection of > feature sets. 
However, it's become less useful for setting up development > environments by humans. For example, devmode.sh was recently > deprecated without a user-friendly replacement. Moreover, during some > informal irc conversations in #oooq, some developers even mentioned the > plan to merge tripleo-quickstart and tripleo-ci. > > I think it would be beneficial to create a set of defaults for tripleo-quickstart > that can be used to spin up new environments; a set of defaults for humans. > This can either be a well-maintained script in tripleo-quickstart itself, or a > brand new project, e.g. > tripleo-quickstart-humans. The number of settings, knobs, and flags should > be kept to a minimum. > > This would accomplish two goals: > > 1. It would bring uniformity to the team. Each environment is > installed the same way. When something goes wrong, we can > eliminate differences in setup when debugging. This should save a > lot of time. > > 2. Quicker and more reliable environment setup. If the set of defaults > is used by many people, it should container fewer bugs because more > people using something should translate into more bug reports, and > more bug fixes. > > These thoughts are coming from the context of tripleo-ui development. I > need an environment in order to develop, but I don't necessarily always care > about how it's installed. I want something that works for most scenarios. > > What do you think? Does this make sense? Does something like this already > exist? > > Thanks for listening! > > Honza What is the recommended way to bring up a small POC of TripleO outside of CI? Documentation suggests using quickstart https://docs.openstack.org/tripleo-docs/latest/install/introduction/architecture.html "For development or proof of concept (PoC) environments, Quickstart can also be used." Quickstart.sh outside of CI has been broken for a while. It requires zuul cloner to work. 
https://bugs.launchpad.net/tripleo/+bug/1754498 From whayutin at redhat.com Wed Sep 5 21:08:35 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 5 Sep 2018 15:08:35 -0600 Subject: [openstack-dev] [E] [tripleo] quickstart for humans In-Reply-To: <4c102593131d458c811f3668e695782d@scwexch19apd.uswin.ad.vzwcorp.com> References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> <4c102593131d458c811f3668e695782d@scwexch19apd.uswin.ad.vzwcorp.com> Message-ID: On Wed, Sep 5, 2018 at 3:55 PM Gordon, Kent wrote: > > > > -----Original Message----- > > From: Honza Pokorny [mailto:honza at redhat.com] > > Sent: Thursday, August 30, 2018 9:28 AM > > To: OpenStack Development Mailing List (not for usage questions) > > > > Subject: [E] [openstack-dev] [tripleo] quickstart for humans > > > > Hello! > > > > Over the last few months, it seems that tripleo-quickstart has evolved > into a > > CI tool. It's primarily used by computers, and not humans. > > tripleo-quickstart is a helpful set of ansible playbooks, and a > collection of > > feature sets. However, it's become less useful for setting up > development > > environments by humans. For example, devmode.sh was recently > > deprecated without a user-friendly replacement. Moreover, during some > > informal irc conversations in #oooq, some developers even mentioned the > > plan to merge tripleo-quickstart and tripleo-ci. > > > > I think it would be beneficial to create a set of defaults for > tripleo-quickstart > > that can be used to spin up new environments; a set of defaults for > humans. > > This can either be a well-maintained script in tripleo-quickstart > itself, or a > > brand new project, e.g. > > tripleo-quickstart-humans. The number of settings, knobs, and flags > should > > be kept to a minimum. > > > > This would accomplish two goals: > > > > 1. It would bring uniformity to the team. Each environment is > > installed the same way. 
When something goes wrong, we can > > eliminate differences in setup when debugging. This should save a > > lot of time. > > > > 2. Quicker and more reliable environment setup. If the set of defaults > > is used by many people, it should contain fewer bugs because more > > people using something should translate into more bug reports, and > > more bug fixes. > > > > These thoughts are coming from the context of tripleo-ui development. I > > need an environment in order to develop, but I don't necessarily always > care > > about how it's installed. I want something that works for most > scenarios. > > > > What do you think? Does this make sense? Does something like this > already > > exist? > > > > Thanks for listening! > > > > Honza > > What is the recommended way to bring up a small POC of TripleO outside of > CI? > > Documentation suggests using quickstart > > > https://docs.openstack.org/tripleo-docs/latest/install/introduction/architecture.html > > "For development or proof of concept (PoC) environments, Quickstart can > also be used." > > Quickstart.sh outside of CI has been broken for a while. > It requires zuul cloner to work. > > https://bugs.launchpad.net/tripleo/+bug/1754498 > > The issue described in bug [1] was caused by pip requirement install errors being swallowed up and not written to the console. TripleO-QuickStart-Extras was not pip installed due to previous errors, and that would cause quickstart-extras.yml to not be installed. The root cause of the failures is that pip install dependencies are not working as expected, or in the same way, without an HTTP proxy server. This bug [1] should be closed, and an RFE bug to ensure things work with an HTTP proxy server should be opened. Please let me know if your work proves otherwise. Thank you!
[1] https://bugs.launchpad.net/tripleo/+bug/1754498 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Sep 5 21:40:35 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 5 Sep 2018 16:40:35 -0500 Subject: [openstack-dev] [glance][keystone][nova][ironic][tripleo][edge][airship] Edge related joint sessions at the PTG next week Message-ID: Hi, The PTG is approaching quickly and as we are planning a couple of joint sessions on edge computing with a couple of OpenStack project teams I would like to give a quick summary of the plans. We have the Edge Computing Group meeting all day Tuesday in Ballroom A[1] from 9am MDT. The group is planning a use cases and requirements discussion in the morning and collaborative sessions with Glance, Keystone, Nova, Airship, TripleO and StarlingX throughout the day. Furthermore people from the group will participate in the relevant session on the Ironic agenda on Thursday. For the full agenda and session topics please see the following etherpad: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 The StarlingX project is meeting all day Wednesday in Winter Park[1] starting at 9am MDT. Their topics include discussions on initial governance and release planning and further technical topics concerning architecture, build and deployment. The StarlingX team will participate in the cross-project sessions on Tuesday listed on the Edge Group etherpad for cross-project discussions. 
For the full agenda of StarlingX sessions please see the following etherpad: https://etherpad.openstack.org/p/stx-PTG-agenda Both groups are planning to provide remote participation option for their sessions, the information will be provided on the etherpads above before the sessions start. Please let me know if you have any questions. Thanks and Best Regards, Ildikó (IRC: ildikov) [1] https://web14.openstack.org/assets/ptg/Denver-map.pdf From billy.olsen at gmail.com Wed Sep 5 21:48:11 2018 From: billy.olsen at gmail.com (Billy Olsen) Date: Wed, 5 Sep 2018 14:48:11 -0700 Subject: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team Message-ID: <5157f326-5422-6a76-efcd-a80439e5d778@gmail.com> Hi, I'd like to propose Felipe Reyes to join the OpenStack Charmers team as a core member. Over the past couple of years Felipe has contributed numerous patches and reviews to the OpenStack charms [0]. His experience and knowledge of the charms used in OpenStack and the usage of Juju make him a great candidate. [0] - https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22 Thanks, Billy Olsen From ildiko.vancsa at gmail.com Wed Sep 5 21:48:46 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 5 Sep 2018 16:48:46 -0500 Subject: [openstack-dev] [Openstack-operators] [ironic][tripleo][edge] Discussing ironic federation and distributed deployments In-Reply-To: References: <61f07d29-185b-7f9a-b0a8-311272c4fd4d@redhat.com> Message-ID: <005FAF9F-32CD-4D75-9D7B-887B55868F1F@gmail.com> Hi, I mentioned a non-official project on the call we had this week for IoT (bare metal) management, called IoTronic. It is developed in connection to a Smart Cities use case, the name of the project is Stack4Things under the #SmartME umbrella project, mostly revolving around Single-Board Computers (Raspberrys, Arduinos, etc) as full-blown (far) edge nodes. 
IoTronic at this point is interoperating/integrated with Horizon/Keystone/Neutron. You can find information on the following link about the umbrella projects and IoTronic: Latest talk about Stack4Things/IoTronic, at the (Vancouver) Summit (includes link to video): https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21201/an-edge-computing-case-study-for-monasca-smart-city-ai-powered-surveillance-and-monitoring Official Stack4Things page (updates in progress, with high-level documentation, papers, etc) http://stack4things.unime.it/ #SmartME Smart City (Messina) portal http://smartme.unime.it/ #SmartME Open Data portal http://smartme-data.unime.it OpenStack Infra-hosted repos for IoTronic http://git.openstack.org/cgit/openstack/iotronic http://git.openstack.org/cgit/openstack/iotronic-lightning-rod http://git.openstack.org/cgit/openstack/iotronic-ui http://git.openstack.org/cgit/openstack/python-iotronicclient Thanks, Ildikó > On 2018. Aug 31., at 6:50, Emilien Macchi wrote: > > On Fri, Aug 31, 2018 at 4:42 AM Dmitry Tantsur wrote: > This is about a call a week before the PTG, not the PTG itself. You're still > very welcome to join! > > It's good too! Our TripleO IRC meeting is at 14 UTC. 
> > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From whayutin at redhat.com Wed Sep 5 21:52:33 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 5 Sep 2018 15:52:33 -0600 Subject: [openstack-dev] [E] [tripleo] quickstart for humans In-Reply-To: References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> <4c102593131d458c811f3668e695782d@scwexch19apd.uswin.ad.vzwcorp.com> Message-ID: On Wed, Sep 5, 2018 at 5:08 PM Wesley Hayutin wrote: > On Wed, Sep 5, 2018 at 3:55 PM Gordon, Kent < > Kent.Gordon at verizonwireless.com> wrote: > >> >> >> > -----Original Message----- >> > From: Honza Pokorny [mailto:honza at redhat.com] >> > Sent: Thursday, August 30, 2018 9:28 AM >> > To: OpenStack Development Mailing List (not for usage questions) >> > >> > Subject: [E] [openstack-dev] [tripleo] quickstart for humans >> > >> > Hello! >> > >> > Over the last few months, it seems that tripleo-quickstart has evolved >> into a >> > CI tool. It's primarily used by computers, and not humans. >> > tripleo-quickstart is a helpful set of ansible playbooks, and a >> collection of >> > feature sets. However, it's become less useful for setting up >> development >> > environments by humans. For example, devmode.sh was recently >> > deprecated without a user-friendly replacement. Moreover, during some >> > informal irc conversations in #oooq, some developers even mentioned the >> > plan to merge tripleo-quickstart and tripleo-ci. >> > >> > I think it would be beneficial to create a set of defaults for >> tripleo-quickstart >> > that can be used to spin up new environments; a set of defaults for >> humans. 
>> > This can either be a well-maintained script in tripleo-quickstart >> itself, or a >> > brand new project, e.g. >> > tripleo-quickstart-humans. The number of settings, knobs, and flags >> should >> > be kept to a minimum. >> > >> > This would accomplish two goals: >> > >> > 1. It would bring uniformity to the team. Each environment is >> > installed the same way. When something goes wrong, we can >> > eliminate differences in setup when debugging. This should save a >> > lot of time. >> > >> > 2. Quicker and more reliable environment setup. If the set of defaults >> > is used by many people, it should container fewer bugs because more >> > people using something should translate into more bug reports, and >> > more bug fixes. >> > >> > These thoughts are coming from the context of tripleo-ui development. I >> > need an environment in order to develop, but I don't necessarily always >> care >> > about how it's installed. I want something that works for most >> scenarios. >> > >> > What do you think? Does this make sense? Does something like this >> already >> > exist? >> > >> > Thanks for listening! >> > >> > Honza >> >> What is the recommended way to bring up a small POC of TripleO outside of >> CI? >> >> Documentation suggests using quickstart >> >> >> https://docs.openstack.org/tripleo-docs/latest/install/introduction/architecture.html >> >> "For development or proof of concept (PoC) environments, Quickstart can >> also be used." >> >> Quickstart.sh outside of CI has been broken for a while. >> It requires zuul cloner to work. >> >> https://bugs.launchpad.net/tripleo/+bug/1754498 >> >> > The issue described in bug [1] was caused by pip requirement install > errors being swallowed up and not written to the console. > TripleO-QuickStart-Extras was not pip installed due to previous errors, and > that would cause quickstart-extras.yml to not be installed. 
> > The root cause of the failures is that pip install dependencies are not > working as expected or in the same way without a http proxy server. This > bug [1] should be closed, a RFE bug to ensure things work w/ a http proxy > server should be opened. > > Please let me know if your work proves otherwise. > Thank you! > > [1] https://bugs.launchpad.net/tripleo/+bug/1754498 > > > I just launched an install with the following. export WD=/var/tmp/test; ./quickstart.sh --no-clone --release tripleo-ci/master --tags all --clean --teardown all -w $WD whayutin-testbox Where whayutin-testbox is my remote testbox, everything is working well atm however there may be an issue w/ the bmc [1] [1] https://bugs.launchpad.net/tripleo/+bug/1790969 > > > >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- > > Wes Hayutin > > Associate MANAGER > > Red Hat > > > > whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay > > > View my calendar and check my availability for meetings HERE > > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abishop at redhat.com Wed Sep 5 22:06:21 2018 From: abishop at redhat.com (Alan Bishop) Date: Wed, 5 Sep 2018 18:06:21 -0400 Subject: [openstack-dev] [glance][keystone][nova][ironic][tripleo][edge][airship] Edge related joint sessions at the PTG next week In-Reply-To: References: Message-ID: On Wed, Sep 5, 2018 at 5:41 PM Ildiko Vancsa wrote: > Hi, > > The PTG is approaching quickly and as we are planning a couple of joint > sessions on edge computing with a couple of OpenStack project teams I would > like to give a quick summary of the plans. > > We have the Edge Computing Group meeting all day Tuesday in Ballroom A[1] > from 9am MDT. The group is planning a use cases and requirements discussion > in the morning and collaborative sessions with Glance, Keystone, Nova, > Airship, TripleO and StarlingX throughout the day. Furthermore people from > the group will participate in the relevant session on the Ironic agenda on > Thursday. > Thanks, Ildiko! I highly encourage members from other storage communities such as cinder and manila to consider attending (I will be). Alan > For the full agenda and session topics please see the following etherpad: > https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 > > The StarlingX project is meeting all day Wednesday in Winter Park[1] > starting at 9am MDT. Their topics include discussions on initial governance > and release planning and further technical topics concerning architecture, > build and deployment. The StarlingX team will participate in the > cross-project sessions on Tuesday listed on the Edge Group etherpad for > cross-project discussions. > > For the full agenda of StarlingX sessions please see the following > etherpad: https://etherpad.openstack.org/p/stx-PTG-agenda > > Both groups are planning to provide remote participation option for their > sessions, the information will be provided on the etherpads above before > the sessions start. > > Please let me know if you have any questions. 
> > Thanks and Best Regards, > Ildikó > (IRC: ildikov) > > [1] https://web14.openstack.org/assets/ptg/Denver-map.pdf > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Sep 5 23:07:09 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 5 Sep 2018 16:07:09 -0700 Subject: [openstack-dev] [all][tc] Last Hours for TC Nominations Message-ID: Hello All, A quick reminder that we are in the last hours for TC candidate announcements. Nominations are open until September 06, 2018 23:45 UTC. If you want to stand for TC, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Voters should also begin thinking about questions they want to ask candidates during the campaigning period that begins as soon as nominations end. Thank you, - The Election Officials [1] http://governance.openstack.org/election/#how-to-submit-your-candidacy -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Sep 5 23:24:39 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 6 Sep 2018 09:24:39 +1000 Subject: [openstack-dev] [election][tc] TC Candidacy In-Reply-To: <1fac-5b8fb800-1-7199a100@211223829> References: <1fac-5b8fb800-1-7199a100@211223829> Message-ID: <20180905232438.GA31148@thor.bakeyournoodle.com> On Wed, Sep 05, 2018 at 01:04:09PM +0200, jean-philippe at evrard.me wrote: > Hello everyone, > > I am hereby announcing my candidacy for a position on the OpenStack Technical Committee (TC). 
Hi JP, I don't see a review in openstack/election from you. Are you able to upload one before the deadline? Please see: https://governance.openstack.org/election/#how-to-submit-a-candidacy for more information. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From iwienand at redhat.com Wed Sep 5 23:31:24 2018 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 6 Sep 2018 09:31:24 +1000 Subject: [openstack-dev] [all]-ish : Updates required for readthedocs publishers In-Reply-To: <1536163649-sup-7521@lrrr.local> References: <7306a7a3-e42f-22d0-c229-0ce74e1cb2e4@redhat.com> <1536163649-sup-7521@lrrr.local> Message-ID: On 09/06/2018 02:10 AM, Doug Hellmann wrote: > Those instructions and the ones linked at > https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs > say to "generate a web hook URL". I think you got the correct answers, thanks Dmitry. Note it is also illustrated at https://imgur.com/a/Pp4LH31 Thanks -i From tony at bakeyournoodle.com Wed Sep 5 23:37:53 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 6 Sep 2018 09:37:53 +1000 Subject: [openstack-dev] [all][election] Last day for TC nominations Message-ID: <20180905233752.GB31148@thor.bakeyournoodle.com> Hi folks, A quick reminder that we are in the last hours for TC candidate announcements. Nominations are open until Sep 06, 2018 23:45 UTC. If you want to stand for TC, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Thank you, [1] http://governance.openstack.org/election/#how-to-submit-your-candidacy Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From zhipengh512 at gmail.com Wed Sep 5 23:49:15 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 6 Sep 2018 07:49:15 +0800 Subject: [openstack-dev] [election] [tc] TC candidacy Message-ID: Hi all, I ran for TC in the last cycle and I guess you can find out more information on the candidacy patch [0], so I won't copy and paste that long article here again :) I found that most of my statement from my last run is still valid today [0][1]. I want to build strong cross-community collaboration, best practices for project level governance and more innovations for OpenStack. Hopefully the second time is a charm and I can bring some fresh and diverse thinking to the group :) [0] https://review.openstack.org/#/c/600279/ [1] https://hannibalhuang.github.io/2018/04/16/i-didn't-build-it/ -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Sep 5 23:54:41 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 6 Sep 2018 09:54:41 +1000 Subject: [openstack-dev] [election][tc] TC Candidacy In-Reply-To: <20180905232438.GA31148@thor.bakeyournoodle.com> References: <1fac-5b8fb800-1-7199a100@211223829> <20180905232438.GA31148@thor.bakeyournoodle.com> Message-ID: <20180905235440.GC31148@thor.bakeyournoodle.com> On Thu, Sep 06, 2018 at 09:24:39AM +1000, Tony Breeds wrote: > Hi JP, > I don't see a review in openstack/election from you. Are you able to > upload one before the deadline?
> > Please see: https://governance.openstack.org/election/#how-to-submit-a-candidacy > for more information. I really should have checked my email for merged changes before sending this. Sorry, all. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jimmy at openstack.org Thu Sep 6 00:52:06 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 05 Sep 2018 19:52:06 -0500 Subject: [openstack-dev] 6 days left for the Forum Brainstorming Period... Message-ID: <5B907A36.3060901@openstack.org> Hello All! The Forum Brainstorming session ends September 11 and the topic submission phase begins September 12. Thank you to all of the projects that have created a wiki and begun the Brainstorming Phase. I'd like to encourage projects that have not yet created an etherpad to do so at https://wiki.openstack.org/wiki/Forum/Berlin2018 This is an opportunity to get feedback, vet ideas, and garner support from the community on your ideas. Don't rely only on a PTL to make the agenda... step on up and place the items you consider important front and center :) If you have questions or concerns about the process, please don't hesitate to reach out. Cheers, Jimmy From dharmendra.kushwaha at india.nec.com Thu Sep 6 01:16:43 2018 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Thu, 6 Sep 2018 01:16:43 +0000 Subject: [openstack-dev] [Tacker] vPTG Topics gathering Message-ID: Hi all, We are planning to have our one-day virtual PTG meetup for Stein during 20th-21st September. I have created an etherpad [1] for the same. Please put your discussion topics on it. Please do it by Saturday, 8th September, so that I can fix the schedule accordingly.
[1]: https://etherpad.openstack.org/p/Tacker-PTG-Stein Thanks & Regards Dharmendra Kushwaha From ekcs.openstack at gmail.com Thu Sep 6 01:23:07 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 05 Sep 2018 18:23:07 -0700 Subject: [openstack-dev] [congress] no IRC meeting 9/14 during PTG week Message-ID: Let's resume on 9/21. Thanks! From zhipengh512 at gmail.com Thu Sep 6 01:28:08 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 6 Sep 2018 09:28:08 +0800 Subject: [openstack-dev] [cyborg]Denver PTG arrangements Message-ID: Hi Team, As we discussed at yesterday's team meeting, please find our schedule for PTG at https://etherpad.openstack.org/p/cyborg-ptg-stein , team dinner information also available :) -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From jazeltq at gmail.com Thu Sep 6 01:52:14 2018 From: jazeltq at gmail.com (Jaze Lee) Date: Thu, 6 Sep 2018 09:52:14 +0800 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: References: Message-ID: Only one service in Stein? Are there any ways to move this along faster? On Wed, Sep 5, 2018 at 9:29 PM, Lance Bragstad wrote: > > Not yet. Keystone worked through a bunch of usability improvements with the unified limits API last release and created the oslo.limit library. We have a patch or two left to land in oslo.limit before projects can really start using unified limits [0]. > > We're hoping to get this working with at least one resource in another service (nova, cinder, etc...) in Stein.
> > [0] https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init > > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee wrote: >> >> Hello, >> Do nova and cinder use keystone's unified limits API to handle quotas? >> If not, is there a plan to do this? >> Thanks a lot. >> >> -- >> 谦谦君子 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- 谦谦君子 From ianyrchoi at gmail.com Thu Sep 6 02:06:34 2018 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Thu, 6 Sep 2018 11:06:34 +0900 Subject: [openstack-dev] [storyboard] [I18n] Can Storyboard web pages be translatable to multiple languages? Message-ID: Hello, I want to ask whether the https://storyboard.openstack.org/ web pages can be translated into multiple languages (e.g., Chinese, Japanese, Korean, German, French, Spanish, ...). I have had this idea in mind for some time, and I think now would be a good time to raise this question since it seems that more project repositories are migrating to Storyboard, and internationalization of Storyboard is one of the criteria for the I18n team to decide whether to migrate to Storyboard (FYI: discussion on the I18n team - [1]). From my very brief investigation, it seems that adding I18n support to html pages like [2], extracting translation source strings to pot files, and syncing with Zanata [3] with the powerful infra support [4] would make Storyboard translatable, allowing the I18n team to translate it into multiple languages.
Am I understanding correctly, and can the I18n team get help with such an effort? With many thanks, /Ian [1] http://lists.openstack.org/pipermail/openstack-i18n/2018-September/003307.html [2] http://git.openstack.org/cgit/openstack-infra/storyboard-webclient/tree/src/app/stories/template/list.html#n26 [3] http://translate.openstack.org/ [4] https://docs.openstack.org/i18n/latest/infra.html From whayutin at redhat.com Thu Sep 6 02:22:24 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 5 Sep 2018 20:22:24 -0600 Subject: [openstack-dev] ptg, the a-line train horns are still active Message-ID: Greetings, Just some advice to pack earplugs, noise-canceling headphones, etc., for the upcoming Denver PTG. The horns used by the a-line train in Denver are still active, and the hotel is right next to the line. Safe travels everyone! [image: 20180904_181031.jpg] -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 20180904_181031.jpg Type: image/jpeg Size: 1792509 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Thu Sep 6 02:59:45 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 5 Sep 2018 22:59:45 -0400 Subject: [openstack-dev] [heat][glance] Heat image resource support issue In-Reply-To: References: Message-ID: On Wed, Sep 5, 2018 at 11:12 AM Rico Lin wrote: > > On Wed, Sep 5, 2018 at 8:47 PM Brian Rosmaita > wrote: > > Since Queens, Glance has had a 'web-download' import method that takes a >> URL [0]. It's enabled by default, but operators do have the ability to >> turn it off. (There's an API call to see what methods are enabled in a >> particular cloud.) Operators also have the ability to restrict what URLs >> are acceptable [1], but that's probably a good thing.
>> >> In short, Glance does have the ability to do what you need since Queens, >> but there's no guarantee that it will be available in all clouds and for >> all URLs. If you foresee that as a problem, it would be a good idea to get >> together with the Glance team at the PTG to discuss this issue. Please add >> it as a topic to the Glance PTG planning etherpad [3] as soon as you can. >> > Cool! Thanks, Brian. > Sounds like something we can use, just one small question in my mind. In > order to use `web-download` in an image resource, we need to create an empty > image and then use import to upload that image. I have tried that scenario > myself now (I'm not really diving into details yet) by: > 1. create an empty image (like `openstack image create --container-format > bare --disk-format qcow2 img_name`) > 2. and then import the image (like `glance image-import --import-method > web-download --uri > https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img `) > But that image is stuck in 'queued' after the first step. > Is this scenario supported by glance? Or what did I do wrong? > That scenario should work, unless you are running glance under uwsgi, in which case the task engine (used to import the image) does not run. You can tell that's the problem if, as an admin user, you use the command 'glance task-list'. You should see a task of type 'api_image_import' with status 'pending'. (You can do 'glance task-show <task-id>' to see the details of the task.) If you are using devstack, you can apply this patch before you call stack.sh: https://review.openstack.org/#/c/545483/ . It will allow everything except Glance to run under uwsgi.
> > >> >> [0] >> https://developer.openstack.org/api-ref/image/v2/index.html#interoperable-image-import >> [1] >> https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html#configuring-the-web-download-method >> [3] https://etherpad.openstack.org/p/stein-ptg-glance-planning >> > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Thu Sep 6 03:51:10 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 6 Sep 2018 11:51:10 +0800 Subject: [openstack-dev] [heat][glance] Heat image resource support issue In-Reply-To: References: Message-ID: Since we all use devstack as the test environment, I can see it's required that we allow this scenario works for devstack in the gateway. Do you got any plan to fix [1] in short future?:) [1] https://review.openstack.org/#/c/545483/ On Thu, Sep 6, 2018 at 11:00 AM Brian Rosmaita wrote: > > > On Wed, Sep 5, 2018 at 11:12 AM Rico Lin > wrote: > >> >> On Wed, Sep 5, 2018 at 8:47 PM Brian Rosmaita >> wrote: >> >> Since Queens, Glance has had a 'web-download' import method that takes a >>> URL [0]. It's enabled by default, but operators do have the ability to >>> turn it off. (There's an API call to see what methods are enabled in a >>> particular cloud.) Operators also have the ability to restrict what URLs >>> are acceptable [1], but that's probably a good thing. >>> >>> In short, Glance does have the ability to do what you need since Queens, >>> but there's no guarantee that it will be available in all clouds and for >>> all URLs. 
If you foresee that as a problem, it would be a good idea to get >>> together with the Glance team at the PTG to discuss this issue. Please add >>> it as a topic to the Glance PTG planning etherpad [3] as soon as you can. >>> >> Cool! Thank Brian. >> Sounds like something we can use, just one small question in my mind. In >> order to use `web-download` in image resource, we need to create an empty >> image than use import to upload that imge. I have try that scenrio by >> myself now (I'm not really diving into detail yet) by: >> 1. create an empty image(like `openstack image create --container-format >> bare --disk-format qcow2 img_name`) >> 2. and than import image (like `glance image-import --import-method >> web-download --uri >> https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img `) >> But that image stuck in queued after first step. >> dose this scenario supported by glance? Or where did I do wrong? >> > > That scenario should work, unless you are running glance under uwsgi, in > which case the task engine (used to import the image) does not run. You > can tell that's the problem if as an admin user you use the command 'glance > task-list'. You should see a task of type 'api_image_import' with status > 'pending'. (You can do 'glance task-show ' to see the details of > the task.) > > If you are using devstack, you can apply this patch before you call > stack.sh: https://review.openstack.org/#/c/545483/ . It will allow > everything except Glance to run under uwsgi. 
> > >> >> >>> >>> [0] >>> https://developer.openstack.org/api-ref/image/v2/index.html#interoperable-image-import >>> [1] >>> https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html#configuring-the-web-download-method >>> [3] https://etherpad.openstack.org/p/stein-ptg-glance-planning >>> >> >> -- >> May The Force of OpenStack Be With You, >> >> *Rico Lin*irc: ricolin >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Sep 6 04:51:47 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 6 Sep 2018 10:21:47 +0530 Subject: [openstack-dev] [heat][glance] Heat image resource support issue In-Reply-To: References: Message-ID: Hi Rico, We will discuss this during PTG, however meantime you can add WSGI_MODE=mod_wsgi in local.conf for testing purpose. Thanks & Best Regards, Abhishek Kekane On Thu, Sep 6, 2018 at 9:21 AM, Rico Lin wrote: > Since we all use devstack as the test environment, I can see it's required > that we allow this scenario works for devstack in the gateway. 
Do you got > any plan to fix [1] in short future?:) > > [1] https://review.openstack.org/#/c/545483/ > > On Thu, Sep 6, 2018 at 11:00 AM Brian Rosmaita > wrote: > >> >> >> On Wed, Sep 5, 2018 at 11:12 AM Rico Lin >> wrote: >> >>> >>> On Wed, Sep 5, 2018 at 8:47 PM Brian Rosmaita < >>> rosmaita.fossdev at gmail.com> wrote: >>> >>> Since Queens, Glance has had a 'web-download' import method that takes a >>>> URL [0]. It's enabled by default, but operators do have the ability to >>>> turn it off. (There's an API call to see what methods are enabled in a >>>> particular cloud.) Operators also have the ability to restrict what URLs >>>> are acceptable [1], but that's probably a good thing. >>>> >>>> In short, Glance does have the ability to do what you need since >>>> Queens, but there's no guarantee that it will be available in all clouds >>>> and for all URLs. If you foresee that as a problem, it would be a good >>>> idea to get together with the Glance team at the PTG to discuss this >>>> issue. Please add it as a topic to the Glance PTG planning etherpad [3] as >>>> soon as you can. >>>> >>> Cool! Thank Brian. >>> Sounds like something we can use, just one small question in my mind. In >>> order to use `web-download` in image resource, we need to create an empty >>> image than use import to upload that imge. I have try that scenrio by >>> myself now (I'm not really diving into detail yet) by: >>> 1. create an empty image(like `openstack image create --container-format >>> bare --disk-format qcow2 img_name`) >>> 2. and than import image (like `glance image-import --import-method >>> web-download --uri https://download.cirros-cloud. >>> net/0.3.5/cirros-0.3.5-x86_64-disk.img `) >>> But that image stuck in queued after first step. >>> dose this scenario supported by glance? Or where did I do wrong? >>> >> >> That scenario should work, unless you are running glance under uwsgi, in >> which case the task engine (used to import the image) does not run. 
You >> can tell that's the problem if as an admin user you use the command 'glance >> task-list'. You should see a task of type 'api_image_import' with status >> 'pending'. (You can do 'glance task-show ' to see the details of >> the task.) >> >> If you are using devstack, you can apply this patch before you call >> stack.sh: https://review.openstack.org/#/c/545483/ . It will allow >> everything except Glance to run under uwsgi. >> >> >>> >>> >>>> >>>> [0] https://developer.openstack.org/api-ref/image/ >>>> v2/index.html#interoperable-image-import >>>> [1] https://docs.openstack.org/glance/latest/admin/ >>>> interoperable-image-import.html#configuring-the-web-download-method >>>> [3] https://etherpad.openstack.org/p/stein-ptg-glance-planning >>>> >>> >>> -- >>> May The Force of OpenStack Be With You, >>> >>> *Rico Lin*irc: ricolin >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jichenjc at cn.ibm.com Thu Sep 6 05:33:14 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Thu, 6 Sep 2018 13:33:14 +0800 Subject: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests In-Reply-To: <11be89ad-a59a-1fe6-5c7b-badb4a06e643@fried.cc> References: <11be89ad-a59a-1fe6-5c7b-badb4a06e643@fried.cc> Message-ID: I see the patch is still in ongoing status; do you have a follow-up plan/discussion for that? We are maintaining 2 CIs (z/VM and KVM on z), so skipping non-compute-related cases would be good for third-party CI. Thanks. Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Eric Fried To: "OpenStack Development Mailing List (not for usage questions)" Date: 09/04/2018 09:35 PM Subject: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests Folks- The other day, I posted an experimental patch [1] with an effectively empty ComputeDriver (just enough to make n-cpu actually start) to see how much of our CI would pass. The theory being that any tests that still pass are tests that don't touch our compute driver, and are therefore not useful to run in our CI environment. Because anything that doesn't touch our code should already be well covered by generic dsvm-tempest CIs. The results [2] show that 707 tests still pass. So I'm wondering whether there might be a way to mark tests as being "compute driver-specific" such that we could switch off all the other ones [3] via a one-line conf setting. Because surely this has potential to save a lot of CI resource not just for us but for other driver vendors, in tree and out. 
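The marking idea could look roughly like this. To be clear, this is a purely hypothetical sketch; none of these names exist in tempest or nova today:

```python
# Hypothetical sketch of "compute driver-specific" test marking: a
# decorator tags the tests that exercise the compute driver, and a single
# switch (the proposed one-line conf setting) filters out everything else.
import unittest

COMPUTE_DRIVER_TESTS = set()

def compute_driver_test(cls):
    """Class decorator recording tests that touch the compute driver."""
    COMPUTE_DRIVER_TESTS.add(cls.__name__)
    return cls

@compute_driver_test
class TestServerSpawn(unittest.TestCase):   # exercises the driver
    def test_spawn(self):
        self.assertTrue(True)

class TestListFlavors(unittest.TestCase):   # driver-agnostic API test
    def test_list(self):
        self.assertTrue(True)

def select_tests(all_cases, driver_specific_only):
    """When driver_specific_only is set, keep only the marked tests."""
    if not driver_specific_only:
        return list(all_cases)
    return [c for c in all_cases if c.__name__ in COMPUTE_DRIVER_TESTS]

cases = [TestServerSpawn, TestListFlavors]
print([c.__name__ for c in select_tests(cases, driver_specific_only=True)])
# ['TestServerSpawn']
```

In a real CI the switch would presumably come from a config file rather than a function argument, but the filtering logic would be the same.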
Thanks, efried [1] https://review.openstack.org/#/c/599066/ [2] http://184.172.12.213/66/599066/5/check/nova-powervm-out-of-tree-pvm/a1b42d5/powervm_os_ci.html.gz [3] I get that there's still value in running all those tests. But it could be done like once every 10 or 50 or 100 runs instead of every time. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From renat.akhmerov at gmail.com Thu Sep 6 06:34:24 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Thu, 6 Sep 2018 13:34:24 +0700 Subject: [openstack-dev] [mistral] [release] [stable] Cherry-pick migration to stable/rocky In-Reply-To: References: Message-ID: I’m also in favour of backporting it because it solves a real production problem. Thanks Renat Akhmerov @Nokia On 5 Sep 2018, 16:55 +0700, Dougal Matthews , wrote: > > On 5 September 2018 at 10:52, Dougal Matthews wrote: > > > > (Note: I added [release] to the email subject, as I think that will make it visible to the right folks.) > > > > Darn. It should have been [stable]. I have added that now. Sorry for the noise. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xiang.edison at gmail.com Thu Sep 6 06:44:18 2018 From: xiang.edison at gmail.com (Edison Xiang) Date: Thu, 6 Sep 2018 14:44:18 +0800 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: <5bf25256-2160-0335-fc37-32a2fb060e8c@redhat.com> References: <5bf25256-2160-0335-fc37-32a2fb060e8c@redhat.com> Message-ID: Hi Dmitry, Thanks for your reply. You are absolutely an SDK expert. > This is a good first step, but if I get it right it does not specify which > response corresponds to which request. You got it. I think it depends on how to use the API schemas. We could define some rules to make sure which response corresponds to which request, for example, by order. Maybe you can give some other suggestions. Also, this is not a fresh concept; we can find the choice definition in XSD [1]. By the way, first we can sort out the OpenStack API schemas, hand these schemas to the developers and the users, and let them choose how to use them. [1] https://www.w3schools.com/xml/el_choice.asp Best Regards, Edison Xiang On Tue, Sep 4, 2018 at 7:01 PM Dmitry Tantsur wrote: > Hi, > > On 08/29/2018 08:36 AM, Edison Xiang wrote: > > Hi team, > > > > As we know, Open API 3.0 was released on July, 2017, it is about one > year ago. > > Open API 3.0 support some new features like anyof, oneof and allof than Open API > > 2.0(Swagger 2.0). > > Now OpenStack projects do not support Open API. > > Also I found some old emails in the Mail List about supporting Open API 2.0 in > > OpenStack. > > > > Some limitations are mentioned in the Mail List for OpenStack API: > > 1. The POST */action APIs. > > These APIs are exist in lots of projects like nova, cinder. > > These APIs have the same URI but the responses will be different > when the > > request is different. > > 2. Micro versions. > > These are controller via headers, which are sometimes used to > describe > > behavioral changes in an API, not just request/response schema changes. 
> > > > About the first limitation, we can find the solution in the Open API 3.0. > > The example [2] shows that we can define different request/response in > the same > > URI by anyof feature in Open API 3.0. > > This is a good first step, but if I get it right it does not specify which > response corresponds to which request. > > > > > About the micro versions problem, I think it is not a limitation related > a > > special API Standard. > > We can list all micro versions API schema files in one directory like > nova/V2, > > I don't think this approach will scale if you plan to generate anything > based on > these schemes. If you generate client code from separate schema files, > you'll > essentially end up with dozens of major versions. > > > or we can list the schema changes between micro versions as tempest > project did [3]. > > ++ > > > > > Based on Open API 3.0, it can bring lots of benefits for OpenStack > Community and > > does not impact the current features the Community has. > > For example, we can automatically generate API documents, different > language > > Clients(SDK) maybe for different micro versions, > > From my experience with writing an SDK, I don't believe generating a > complete > SDK from API schemes is useful. Maybe generating low-level protocol code > to base > an SDK on, but even that may be easier to do by hand. > > Dmitry > > > and generate cloud tool adapters for OpenStack, like ansible module, > terraform > > providers and so on. > > Also we can make an API UI to provide an online and visible API search, > API > > Calling for every OpenStack API. > > 3rd party developers can also do some self-defined development. 
> > > > [1] https://github.com/OAI/OpenAPI-Specification > > [2] > > > https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml#L94-L109 > > [3] > > > https://github.com/openstack/tempest/tree/master/tempest/lib/api_schema/response/compute > > > > Best Regards, > > Edison Xiang > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Thu Sep 6 06:52:48 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 6 Sep 2018 01:52:48 -0500 Subject: [openstack-dev] Bumping eventlet to 0.24.1 In-Reply-To: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> References: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> Message-ID: <20180906065248.m73g3nhsv4v3imkv@gentoo.org> On 18-08-23 09:50:13, Matthew Thode wrote: > This is your warning, if you have concerns please comment in > https://review.openstack.org/589382 . cross tests pass, so that's a > good sign... atm this is only for stein. > I pushed the big red button. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dangtrinhnt at gmail.com Thu Sep 6 07:39:13 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 6 Sep 2018 16:39:13 +0900 Subject: [openstack-dev] [Storyboard][Searchlight] Commits of searchlight-specs are not attached to the story Message-ID: Hi, Looks like the commits for searchlight-specs are not attached to the story on Storyboard. Example: Commit: https://review.openstack.org/#/c/600316/ Story: https://storyboard.openstack.org/#!/story/2003677 Is there anything that I need to do to link those 2 together? Thanks, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jazeltq at gmail.com Thu Sep 6 07:52:42 2018 From: jazeltq at gmail.com (Jaze Lee) Date: Thu, 6 Sep 2018 15:52:42 +0800 Subject: [openstack-dev] [blazar] about reservation Message-ID: Hello, I viewed the source code and could not find the check logic for reserving an instance. It just creates a lease, and nova just creates a flavor. How do we ensure the resource is really reserved for us? Do we put the host into a new aggregate, so that nobody except blazar will use the host? -- 谦谦君子 From gmann at ghanshyammann.com Thu Sep 6 07:59:40 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 06 Sep 2018 16:59:40 +0900 Subject: [openstack-dev] [election][tc] TC Candidacy Message-ID: <165ade4d83b.eadca46557606.5598003762013755194@ghanshyammann.com> Hi All, I’d like to announce my candidacy for the OpenStack Technical Committee position. I am glad to work in the OpenStack community and would like to thank all the contributors and leaders who supported me in exploring new things, which brings out my best for the community. Let me introduce myself, briefly. 
I joined the OpenStack community in 2012 as an operator and have been a full-time upstream contributor since 2014, starting in the middle of the Icehouse release. I have been PTL for the QA Program since the Rocky cycle and am an active contributor to the QA projects and Nova. I have also been contributing to many other projects, especially on Tempest plugins, for bug fixes and Tempest compatibility changes. Along with that, I am actively involved in programs helping new contributors in OpenStack: 1. As a mentor in the Upstream Institute Training since the Barcelona Summit (Oct 2016) [1] 2. In the FirstContact SIG [2], helping new contributors onboard in OpenStack. It's always a great experience to introduce the OpenStack upstream workflow to new contributors and encourage them to start contributing. I feel that is very much needed in OpenStack because of the current turnover of experienced contributors. The TC's direction has always been valuable and results-oriented, whether on technical debt or on efforts towards the diversity of the community. This kind of work and position has never been an easy task, especially in such a big community as OpenStack. Having observed the TC's work over the past couple of years, I am very much motivated to help in this direction, in order to contribute more towards cross-project work and collaboration among projects and people. Below are the areas I would like to focus on as a TC member: * Share project teams' work on common goals: Every cycle we have TC goals and some future direction on which all the projects need to start working. Projects try to do their best, but the big challenge for them is resource bandwidth. In the current situation, it is very hard for project teams to accommodate that work by themselves. Project teams are shrinking and key members are overloaded. My idea is to form a temporary team of contributors under the goal champion and finish the common work at the start of the cycle (so that we can make sure to finish the work well on time and test it throughout the cycle). 
That temporary team can be formed with volunteers from any project team or new part-time contributors, with the help of OUI, the FirstContact SIG, etc. * Cross-project and cross-community testing: I would like to work more on collaboration on testing efforts across projects and the community. We have a plugin approach for testing in OpenStack, which I agree is not perfect at this stage. I would like to work on more collaboration and guidelines to improve that area. From the QA team's point of view, I would like the QA team to do more collaborative work with all the projects on their proper testing. And further, to extend the testing collaboration to adjacent communities. * Encourage new leaders: new contributors, and thus new leaders, are much needed in the community. An internal or external leadership program, for example, could be very helpful. Regardless of the results of this election, I will work hard in the directions above and help the community to the best of my ability. Thank you for reading and for your consideration. - Ghanshyam Mann (gmann) * Review: http://stackalytics.com/?release=all&metric=marks&user_id=ghanshyammann&project_type=all * Commit: http://stackalytics.com/?release=all&metric=commits&user_id=ghanshyammann&project_type=all * Foundation Profile: https://www.openstack.org/community/members/profile/6461 * Website: https://ghanshyammann.com * IRC (Freenode): gmann [1] https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute [2] https://wiki.openstack.org/wiki/First_Contact_SIG From gmann at ghanshyammann.com Thu Sep 6 08:35:04 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 06 Sep 2018 17:35:04 +0900 Subject: [openstack-dev] [openstack-operator] [qa] [forum] [berlin] QA Brainstorming Topic ideas for Berlin 2018 Message-ID: <165ae054131.ba353a7c58848.5452108263583664063@ghanshyammann.com> Hi All, I have created the below etherpad to collect the forum ideas related to QA for the Berlin Summit. 
Please write up your ideas with your IRC name on the etherpad. https://etherpad.openstack.org/p/berlin-stein-forum-qa-brainstorming -gmann From cdent+os at anticdent.org Thu Sep 6 10:05:46 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 6 Sep 2018 11:05:46 +0100 (BST) Subject: [openstack-dev] [nova] [placement] modified devstack using openstack/placement Message-ID: Yesterday I experimented to discover the changes needed in devstack to get it working with the code in openstack/placement. The results are at https://review.openstack.org/#/c/600162/ and it is passing tempest. It isn't passing grenade but that's expected at this stage. Firstly, thanks to everyone who helped this week to create and merge a bunch of placement code to get the repo working. Waking up this morning to see a green tempest was rather nice. Secondly, the work—as expected—exposes a few gaps, most of which are already known. If you're not interested in the details, here's a good place to stop reading, but if you are, see below. This is mostly notes, for the sake of sharing information, not a plan. Please help me make a plan. 1) To work around the fact that there is currently no "placement-manage db_sync" equivalent I needed to hack up something to make sure the database tables exist. So I faked a "placement-manage db table_create". That's in https://review.openstack.org/#/c/600161/ That uses sqlalchemy's 'create_all' functionality to create the tables from their Models, rather than using any migrations. I did it this way for two reasons: 1) I already had code for it in placedock[1] that I could copy, 2) I wanted to set aside migrations for the immediate tests. We'll need to come back to that, because the lack of dealing with already existing tables is _part_ of what is blocking grenade. However, for new installs 'create_all' is fast and correct and something we might want to keep. 2) The grenade jobs don't have 'placement' in $PROJECTS so they die during upgrade. 
3) The nova upgrade.sh will need some adjustments to do the data migrations we've talked about in the "(technical)" thread. Also we'll need to decide how much of the placement stuff stays in there and how much goes somewhere else. That's all stuff we can work out, especially if some grenade-oriented people join in the fun. One question I have on the lib/placement changes in devstack: Is it useful to have those changes guarded by a conditional of the form: if placement came from its own repo: do the new stuff else: do the old stuff ? [1] https://github.com/cdent/placedock -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From rico.lin.guanyu at gmail.com Thu Sep 6 10:18:19 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 6 Sep 2018 18:18:19 +0800 Subject: [openstack-dev] [heat][glance] Heat image resource support issue In-Reply-To: References: Message-ID: On Thu, Sep 6, 2018 at 12:52 PM Abhishek Kekane wrote: > Hi Rico, > > We will discuss this during PTG, however meantime you can add > WSGI_MODE=mod_wsgi in local.conf for testing purpose. > Cool! If you can let me know which session it is, I will try to be there if there is no conflict -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel at redhat.com Thu Sep 6 10:33:12 2018 From: michel at redhat.com (Michel Peterson) Date: Thu, 6 Sep 2018 13:33:12 +0300 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked In-Reply-To: <20180905150309.cxstnk6i2sms6pj4@gentoo.org> References: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org> <20180905150309.cxstnk6i2sms6pj4@gentoo.org> Message-ID: On Wed, Sep 5, 2018 at 6:03 PM, Matthew Thode wrote: > On 18-08-31 19:52:09, Matthew Thode wrote: > > The requirements project has a co-installability test for the various > > projects, networking-odl being included. 
> > > > Because of the way the dependancy on ceilometer is done it is blocking > > all reviews and updates to the requirements project. > > > > http://logs.openstack.org/96/594496/2/check/requirements- integration/8378cd8/job-output.txt.gz#_2018-08-31_22_54_49_357505 > > The requirements team has gone ahead and made an awful hack to get gate > unwedged. The commit message is a very good summary of our reasoning > why it has to be this way for now. My comment explains our plan going > forward (there will be a revert prepared as soon as this merges, for > instance). > > step 1. merge this > step 2. look into and possibly fix our tooling (why was the gitref > addition not rejected by gate) > step 3. fix networking-odl (release ceilometer) > step 4. unmerge this > I remember that before landing the problematic patch [1] there was some discussion around it. Basically the problem was not n-odl but ceilometer not being in pypi, but we never foresaw this problem. Now that the problem is so critical, the question is how can we, from the n-odl team, help in fixing this? I am open to helping in any effort that involves n-odl or any other project. Sorry this message fell through the cracks and I didn't answer before. PS: I'm CCing Mike Kolesnik on this email, as he will be going to the PTG and can represent n-odl. [1] https://review.openstack.org/557370/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Sep 6 10:44:33 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 6 Sep 2018 16:14:33 +0530 Subject: [openstack-dev] [heat][glance] Heat image resource support issue In-Reply-To: References: Message-ID: Hi Rico, Session times are not decided yet; could you please add your topic to [1] so that it will be on the discussion list. Also, glance sessions are scheduled from Wednesday to Friday between 9 AM and 5 PM, so you can drop by at your convenience. 
[1] https://etherpad.openstack.org/p/stein-ptg-glance-planning Thanks & Best Regards, Abhishek Kekane On Thu, Sep 6, 2018 at 3:48 PM, Rico Lin wrote: > > On Thu, Sep 6, 2018 at 12:52 PM Abhishek Kekane > wrote: > >> Hi Rico, >> >> We will discuss this during PTG, however meantime you can add >> WSGI_MODE=mod_wsgi in local.conf for testing purpose. >> > > Cool, If you can let me know which session it's, I will try to be there if > no conflict > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shakhat at gmail.com Thu Sep 6 12:20:13 2018 From: shakhat at gmail.com (Ilya Shakhat) Date: Thu, 6 Sep 2018 14:20:13 +0200 Subject: [openstack-dev] [all]-ish : Updates required for readthedocs publishers In-Reply-To: References: <7306a7a3-e42f-22d0-c229-0ce74e1cb2e4@redhat.com> <1536163649-sup-7521@lrrr.local> Message-ID: What is the process once webhook_id is added to a project's zuul.yaml? Should I propose a change to project-config/zuul.d/projects.yaml, or will it be done automatically? BTW (not completely related to this topic, but since I'm touching zuul.yaml anyway) - what are the best practices for non-official projects - should the zuul configuration stay in project-config, or should it be moved to a local zuul.yaml? Thanks, Ilya On Thu, 6 Sep 2018 at 1:31, Ian Wienand wrote:
Note it is also > illustrated at > > https://imgur.com/a/Pp4LH31 > > Thanks > > -i > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Sep 6 12:31:56 2018 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 6 Sep 2018 13:31:56 +0100 Subject: [openstack-dev] [blazar] Blazar Forum session brainstorming etherpad Message-ID: Hi everyone, I created an etherpad [1] to gather Berlin Forum session ideas for the Blazar project, or resource reservation in general. Please contribute! Thanks, Pierre [1] https://etherpad.openstack.org/p/Berlin-stein-forum-blazar-brainstorming From mnaser at vexxhost.com Thu Sep 6 12:33:44 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 6 Sep 2018 08:33:44 -0400 Subject: [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: On Wed, Sep 5, 2018 at 12:41 PM Dan Smith wrote: > > > I think there was a period in time where the nova_api database was created > > where entires would try to get pulled out from the original nova database and > > then checking nova_api if it doesn't exist afterwards (or vice versa). One > > of the cases that this was done to deal with was for things like instance types > > or flavours. > > > > I don't know the exact details but I know that older instance types exist in > > the nova db and the newer ones are sitting in nova_api. Something along > > those lines? > > Yep, we've moved entire databases before in nova with minimal disruption > to the users. 
Not just flavors, but several pieces of data came out of > the "main" database and into the api database transparently. It's > doable, but with placement being split to a separate > project/repo/whatever, there's not really any option for being graceful > about it in this case. > > > At this point, I'm thinking turn off placement, setup the new one, do > > the migration > > of the placement-specific tables (this can be a straightforward documented task > > OR it would be awesome if it was a placement command (something along > > the lines of `placement-manage db import_from_nova`) which would import all > > the right things > > > > The idea of having a command would be *extremely* useful for deployment tools > > in automating the process and it also allows the placement team to selectively > > decide what they want to onboard? > > Well, it's pretty cut-and-dried as all the tables in nova-api are either > for nova or placement, so there's not much confusion about what belongs. > > I'm not sure that doing this import in python is really the most > efficient way. I agree a placement-manage command would be ideal from an > "easy button" point of view, but I think a couple lines of bash that > call mysqldump are likely to vastly outperform us doing it natively in > python. We could script exec()s of those commands from python, but.. I > think I'd rather just see that as a shell script that people can easily > alter/test on their own. > > Just curious, but in your case would the service catalog entry change at > all? If you stand up the new placement in the exact same spot, it > shouldn't, but I imagine some people will have the catalog entry change > slightly (even if just because of a VIP or port change). Am I > remembering correctly that the catalog can get cached in various places > such that much of nova would need a restart to notice? 
We already have placement in the catalog and it's behind a load balancer, so changing the backends resolves things right away; we likely won't need any restarts (and I don't think OSA will either, since it uses the same model). > --Dan -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From lbragstad at gmail.com Thu Sep 6 14:00:39 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 6 Sep 2018 09:00:39 -0500 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: References: Message-ID: I wish there was a better answer for this question, but currently there are only a handful of us working on the initiative. If you, or someone you know, is interested in getting involved, I'll happily help onboard people. On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee wrote: > On Stein only one service? > Are there methods to move this along faster? > Lance Bragstad wrote on Wed, Sep 5, 2018 at 9:29 PM: > > > > Not yet. Keystone worked through a bunch of usability improvements with > the unified limits API last release and created the oslo.limit library. We > have a patch or two left to land in oslo.limit before projects can really > start using unified limits [0]. > > > > We're hoping to get this working with at least one resource in another > service (nova, cinder, etc...) in Stein. > > > > [0] > https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init > > > > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee wrote: > >> > >> Hello, > >> Do nova and cinder use keystone's unified limits API to do the quota job? > >> If not, is there a plan to do this? > >> Thanks a lot.
> >> > >> -- > >> 谦谦君子 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Thu Sep 6 14:11:51 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 6 Sep 2018 09:11:51 -0500 Subject: [openstack-dev] [keystone] Stein Forum Brainstorming Message-ID: I can't believe it's already time to start thinking about forum topics, but it's upon us [0]! I've created an etherpad for us to brainstorm ideas that we want to bring to the forum in Germany [1]. I also linked it to the wiki [2]. Please feel free to throw out ideas. We can go through them as a group before the submission phase starts if people wish. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134336.html [1] https://etherpad.openstack.org/p/BER-keystone-forum-sessions [2] https://wiki.openstack.org/wiki/Forum/Berlin2018#Etherpads_from_Teams_and_Working_Groups -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lujinluo at gmail.com Thu Sep 6 14:14:15 2018 From: lujinluo at gmail.com (Lujin Luo) Date: Thu, 6 Sep 2018 07:14:15 -0700 Subject: [openstack-dev] [Neutron] [Upgrades] Cancel meeting on 13th Sept. Message-ID: Hello everyone, We are canceling our next Upgrades subteam meeting on 13th September due to the PTG. Should you have any questions, please reach out to me on #openstack-neutron. Best regards, Lujin From lbragstad at gmail.com Thu Sep 6 14:14:47 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 6 Sep 2018 09:14:47 -0500 Subject: [openstack-dev] [keystone] No meeting or office hours September 11th Message-ID: I wanted to send out a reminder that we won't be having formal office hours or a team meeting next week due to the PTG. Both will resume on the 18th of September. Thanks, Lance -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Thu Sep 6 14:26:59 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 6 Sep 2018 16:26:59 +0200 Subject: [openstack-dev] [all]-ish : Updates required for readthedocs publishers In-Reply-To: References: <7306a7a3-e42f-22d0-c229-0ce74e1cb2e4@redhat.com> <1536163649-sup-7521@lrrr.local> Message-ID: On 2018-09-06 14:20, Ilya Shakhat wrote: > What is the process once webhook_id is added to project's zuul.yaml? > Should I propose a change to project-config/zuul.d/projects.yaml, or will it > be done automatically? You need to send a change to the project-config repo that applies the docs-on-readthedocs template and sets the webhook variable. > BTW (not completely related to this topic, but since I'm touching > zuul.yaml anyway) - what are the best practices for non-official > projects - should zuul configuration stay in project-config or better be > moved to a local zuul.yaml? > See https://docs.openstack.org/infra/manual/creators.html#central-config-exceptions for what should stay in project-config.
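For illustration, a project-config stanza applying the docs-on-readthedocs template might look roughly like this (the repo name and webhook id are placeholders, not real values; check the template documentation for the exact variable names):

```yaml
# Hypothetical sketch of a project-config entry (zuul.d/projects.yaml)
# enabling readthedocs publishing. The repo name and rtd_webhook_id
# below are placeholders, not values from a real project.
- project:
    name: openstack/example-project
    templates:
      - docs-on-readthedocs
    vars:
      rtd_webhook_id: '12345'
```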
Since the publishing is part of a release, the template needs to stay in project-config, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From ed at leafe.com Thu Sep 6 15:22:07 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 6 Sep 2018 10:22:07 -0500 Subject: [openstack-dev] [placement] Extraction: Phase 2 Message-ID: <8DC1970A-3BAC-42AE-936F-9C7ABC532BE1@leafe.com> Now that the work to create a repo of Placement, extracted from Nova and passing tests, has finished, there is still more work to be done. To help organize our efforts, we have created an etherpad [0] for this. At the bottom of that page is a section titled "TODOs for further cleaning up". As we come across issues, they will be added to that section. If you wish to help out by working on one of those issues, please mark that issue with your name so that we can avoid duplicating effort. And if you have any questions about the issue, please ask for help in the #openstack-placement IRC channel. -- Ed Leafe [0] https://etherpad.openstack.org/p/placement-extract-stein-3 From melwittt at gmail.com Thu Sep 6 15:35:22 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 6 Sep 2018 08:35:22 -0700 Subject: [openstack-dev] [nova] Stein Forum brainstorming Message-ID: <5fc1b1df-bbcd-7061-1f22-bb30a83fdc86@gmail.com> Greetings all, Apparently, we have 6 days left [1] to brainstorm topic ideas for the Forum at the Berlin summit and the submission period begins on September 12. Please feel free to use this etherpad as a place to capture topic ideas [2]. I've added it to the list of etherpads on the forum wiki [3].
Cheers, -melanie [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134336.html [2] https://etherpad.openstack.org/p/nova-forum-stein [3] https://wiki.openstack.org/wiki/Forum/Berlin2018#Etherpads_from_Teams_and_Working_Groups From david.ames at canonical.com Thu Sep 6 15:50:32 2018 From: david.ames at canonical.com (David Ames) Date: Thu, 6 Sep 2018 08:50:32 -0700 Subject: [openstack-dev] [charms] 18.08 OpenStack Charms release Message-ID: Announcing the 18.08 release of the OpenStack Charms. The 18.08 charms add support for OpenStack Rocky, Ceph Mimic, and Keystone fernet tokens. 51 bugs have been fixed and released across the OpenStack charms. For full details of the release, please refer to the release notes: https://docs.openstack.org/charm-guide/latest/1808.html Thanks go to the following contributors for this release: James Page Frode Nordahl David Ames Chris MacNaughton Ryan Beisner Liam Young Corey Bryant Alex Kavanagh Dmitrii Shcherbakov Edward Hope-Morley Van Hung Pham Shane Peters Billy Olsen Nobuto Murata Sean Feole Pete Vander Giessen daixianmeng wangqi Felipe Reyes Nikolay Nikolaev guozj Nicolas Pochet Nam Xav Paice Hua Zhang huang.zhiping Brin Zhang Andrew McLeod Tilman Baumann yuhaijia lvxianguo Roger Yu Chris Sanders Rajat Dhasmana zhangmin From prometheanfire at gentoo.org Thu Sep 6 15:56:25 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 6 Sep 2018 10:56:25 -0500 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked In-Reply-To: References: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org> <20180905150309.cxstnk6i2sms6pj4@gentoo.org> Message-ID: <20180906155625.giqoxbsr3i3uzb4p@gentoo.org> On 18-09-06 13:33:12, Michel Peterson wrote: > On Wed, Sep 5, 2018 at 6:03 PM, Matthew Thode > wrote: > > > On 18-08-31 19:52:09, Matthew Thode wrote: > > > The requirements project has a co-installability test for the various > > > projects,
networking-odl being included. > > > > > > Because of the way the dependency on ceilometer is done it is blocking > > > all reviews and updates to the requirements project. > > > > > > http://logs.openstack.org/96/594496/2/check/requirements- > > integration/8378cd8/job-output.txt.gz#_2018-08-31_22_54_49_357505 > > > > The requirements team has gone ahead and made an awful hack to get the gate > > unwedged. The commit message is a very good summary of our reasoning > > why it has to be this way for now. My comment explains our plan going > > forward (there will be a revert prepared as soon as this merges, for > > instance). > > > > step 1. merge this > > step 2. look into and possibly fix our tooling (why was the gitref > > addition not rejected by gate) > > step 3. fix networking-odl (release ceilometer) > > step 4. unmerge this > > > > I remember that before landing the problematic patch [1] there was some > discussion around it. Basically the problem was not n-odl but ceilometer > not being on PyPI, but we never foresaw this problem. > > Now that the problem is so critical, the question is how can we, from the > n-odl team, help in fixing this? I am open to helping in any effort that > involves n-odl or any other project. > > Sorry this message fell through the cracks and I didn't answer before. > > PS: I'm CCing Mike Kolesnik on this email, as he will be going to the PTG > and can represent n-odl. > > [1] https://review.openstack.org/557370/ I think the best choice at this point in time would be to get a ceilometer release onto PyPI. At that time you can move to using that version as your project minimum. Just make sure that if you need a new feature you ask them for a release instead of using a git SHA. I'll be at the PTG as well, infra/upgrade/OSA rooms mostly I think. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tpb at dyncloud.net Thu Sep 6 16:19:28 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 6 Sep 2018 12:19:28 -0400 Subject: [openstack-dev] [manila] retrospective and forum brainstorm etherpads Message-ID: <20180906161928.zhja4m4qhzpxshta@barron.net> Devs, Ops, community: We're going to start off the manila PTG sessions Monday with a retrospective on the Rocky cycle, using this etherpad [1]. Please enter your thoughts on what went well and what we should improve in Stein so that we take it into consideration. It's also time (until next Wednesday) to brainstorm topics for the Berlin Forum. Please record these here [2]. We'll discuss this subject at the PTG as well. Thanks! -- Tom Barron (tbarron) [1] https://etherpad.openstack.org/p/manila-rocky-retrospective [2] https://etherpad.openstack.org/p/manila-berlin-forum-brainstorm From jungleboyj at gmail.com Thu Sep 6 16:23:44 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 6 Sep 2018 11:23:44 -0500 Subject: [openstack-dev] [cinder][nova][placement] Doodle Calendar Created for Placement Discussion Message-ID: <75599e02-185d-536d-1c00-6465cab63f0a@gmail.com> All, We discussed in our weekly meeting yesterday that it might be good to plan an additional meeting at the PTG to continue discussions with regard to Cinder's use of the Placement Service. I have looked at the room schedule [1] and there are quite a few open rooms on Monday. There are fewer rooms on Tuesday, but there are still some options each day. Please fill out the poll [2] ASAP if you are interested in attending, and then I will reserve a room as soon as it looks like we have quorum. Thank you!
Jay [1] http://ptg.openstack.org/ptg.html [2] https://doodle.com/poll/4twwhy46bxerrthx From dabarren at gmail.com Thu Sep 6 16:39:57 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Thu, 6 Sep 2018 18:39:57 +0200 Subject: [openstack-dev] [Kolla] Denver PTG schedule Message-ID: Hi folks, This is the schedule for the Kolla Denver PTG. If anyone has a hard conflict with any discussion, please let me know and we can try to find a slot that works better. Wednesday 9:00 - 9:45 [kolla] Image optimization 9:50 - 10:35 [kolla] Python3 images 10:40 - 11:15 [kolla] Add health checks to kolla images 11:20 - 12:00 [kolla] CI for projects consuming kolla images 12:00 - 13:30 LUNCH 1:30 - 2:15 [kolla-ansible] Compatible OCI runtime 2:20 - 3:05 [kolla-ansible] Backups 3:10 - 3:55 [kolla-ansible] DRY ansible 4:00 - 4:45 [kolla-ansible] Kayobe Thursday 9:00 - 9:45 [kolla-ansible] Dev mode optimization 9:50 - 10:35 [kolla-ansible] Firewall configuration 10:40 - 11:15 [kolla-ansible] Fast-forward upgrade 11:20 - 12:00 [kolla-ansible] Multi release support 12:00 - 13:30 LUNCH 1:30 - 2:15 [kolla-ansible] Cells v2 2:20 - 3:05 [kolla-ansible] Running kolla at scale 3:10 - 3:55 Kolla GUI 4:00 - 4:45 PTG recap and Stein priority setting Friday 9:00 - 9:45 [CI] Service testing and scenarios 9:50 - 10:35 [CI] Upgrade jobs 10:40 - 11:15 [CI] Usage of tempest and rally 11:20 - 12:00 Define PTG TODOs (blueprints, specs, etc) 12:00 - 13:30 LUNCH 1:30 - End of day - Open discussion Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Thu Sep 6 16:44:04 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 6 Sep 2018 11:44:04 -0500 Subject: [openstack-dev] [neutron] PTG agenda In-Reply-To: References: Message-ID: Dear Neutron Team, All the Neutron sessions will take place in the Vail meeting room, atrium level.
Please see the attached PDF for further reference. See you all there! On Tue, Sep 4, 2018 at 12:40 PM, Miguel Lavalle wrote: > Dear Neutron Team, > > I have scheduled all the topics that were proposed for the PTG in Denver > here: https://etherpad.openstack.org/p/neutron-stein-ptg. Please go to > line 128 and onwards to see the detailed schedule. Please reach out to me > should we need to make any changes or adjustments. > > Best regards > > Miguel > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 5_Map_RoomAssignment_OnePager5_HR.pdf Type: application/pdf Size: 581941 bytes Desc: not available URL: From dabarren at gmail.com Thu Sep 6 16:47:35 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Thu, 6 Sep 2018 18:47:35 +0200 Subject: [openstack-dev] [kolla][tripleo][openstack-helm][kayobe] Cross-project discussion Message-ID: Hi, The Kolla team is going to have two cross-project sessions on Wednesday. For TripleO: the addition of health checks to kolla images, at 10:40 - 11:15. For any project consuming kolla images: CI jobs in the kolla CI, at 11:20 - 12:00. We hope someone from your teams can attend the sessions.
URL: From ed at leafe.com Thu Sep 6 17:00:17 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 6 Sep 2018 12:00:17 -0500 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Once again we did not have anything too controversial to discuss this week. We talked a bit about the Open API 3.0 discussions, but until there is something concrete to work with, there really isn't much for the API-SIG to do other than facilitate these conversations. One SIG member, elmiko, shared his tool for code generation from Open API [9], which led to a broader discussion on code generation in general. Not exactly a pure API-related topic, but interesting nonetheless. As there were no new guidelines to review or bugs to fix, the remaining part of the meeting was to discuss the plans for the API-SIG session [7] at the upcoming Denver PTG [8]. We were hoping to attract a bigger crowd by including a topic like "Running your serverless microversioned blockchain in Kubernetes", but unfortunately cdent liked that topic so much he sold it to some VCs and retired a rich man. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community. 
* None # Guidelines Currently Under Review [3] * Add an api-design doc with design advice https://review.openstack.org/592003 * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-sig,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://storyboard.openstack.org/#!/project/1039 [6] https://git.openstack.org/cgit/openstack/api-sig [7] https://etherpad.openstack.org/p/api-sig-stein-ptg [8] https://www.openstack.org/ptg/ [9] https://gitlab.com/elmiko/deswag Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From gkotton at vmware.com Thu Sep 6 17:03:50 2018 From: gkotton at vmware.com (Gary Kotton) Date: Thu, 6 Sep 2018 17:03:50 +0000 Subject: [openstack-dev] [Neutron] Bug status Message-ID: <73F0B20D-DC09-4CC6-944D-D54C9D47A0C3@vmware.com> Hi, It has been a relatively quiet week and not many issues were opened. There are a few bugs with IPv6 subnets. Most have missing information and I have asked the reporters for some extra information. There are no blockers or issues of concern. Wishing everyone a productive PTG. Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Sep 6 17:04:34 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 6 Sep 2018 17:04:34 +0000 Subject: [openstack-dev] [Storyboard][Searchlight] Commits of searchlight-specs are not attached to the story In-Reply-To: References: Message-ID: <20180906170434.dpty2di6al6pgqx2@yuggoth.org> On 2018-09-06 16:39:13 +0900 (+0900), Trinh Nguyen wrote: > Looks like the commits for searchlight-specs are not attached to > the story on Storyboard. Example: > > Commit: https://review.openstack.org/#/c/600316/ > Story: https://storyboard.openstack.org/#!/story/2003677 > > Is there anything that I need to do to link those 2 together?
In change 600316 you included a "Task: #2619" footer, when the actual task you seem to have wanted to reference was 26199 (task 2619 is associated with unrelated story 2000403). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aj at suse.com Thu Sep 6 17:13:05 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 6 Sep 2018 19:13:05 +0200 Subject: [openstack-dev] [all] Switching docs jobs to new tox-docs for specs repos Message-ID: I've proposed two changes to switch the docs building and publishing of specs repos to the new PTI openstack-tox-docs job. This means that your docs job will now run "tox -e docs". A quick check shows that all repos are set up properly - but if docs building suddenly fails for a specs repo, do the same kind of changes you did for your main repo when you switched to the "publish-openstack-docs-pti" template. The changes are: https://review.openstack.org/600457 https://review.openstack.org/600458 Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From fungi at yuggoth.org Thu Sep 6 17:33:19 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 6 Sep 2018 17:33:19 +0000 Subject: [openstack-dev] [OpenStack-I18n] [storyboard] [I18n] Can Storyboard web pages be translatable to multiple languages? In-Reply-To: References: Message-ID: <20180906173318.7z5ojmvgfieq6jq6@yuggoth.org> On 2018-09-06 11:06:34 +0900 (+0900), Ian Y. Choi wrote: > I want to ask whether the https://storyboard.openstack.org/ web pages > can be translated to multiple languages (e.g., Chinese, > Japanese, Korean, German, French, Spanish, ...) or not. [...]
As also discussed in #storyboard I think this is an interesting idea, particularly for organizations who may want to use StoryBoard deployments where most users are fluent in one of those other languages (and perhaps not at all with English). I do think interface translation is potentially useful for the OpenStack community's storyboard.openstack.org service too, though we'll want to keep in mind that for most of the projects hosted there English is explicitly preferred to increase collaboration (same as with most OpenStack community mailing lists, IRC channels, code review and so on). We may discover that we need to find ways to point out to people, particularly those arriving there for the first time, that they should file stories in English if at all possible even though the interface may be presented in their personally preferred language instead. I've added this topic to https://etherpad.openstack.org/p/sb-stein-ptg-planning and am looking forward to seeing you in Denver! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jaosorior at redhat.com Thu Sep 6 17:49:08 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Thu, 6 Sep 2018 20:49:08 +0300 Subject: [openstack-dev] [TripleO] Denver PTG Schedule Message-ID: Hello! The Denver schedule is now available, still at the same link: https://etherpad.openstack.org/p/tripleo-ptg-stein And I also made a Google Calendar that folks can follow: https://calendar.google.com/calendar?cid=MjdqZmUwNmN1dWhldDdjYm5vb3RvaGRyZTRAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ In case you would prefer another tool, I also attached the ical file. See you next week! BR -------------- next part -------------- A non-text attachment was scrubbed... 
Name: tripleo-denver.ical.zip Type: application/zip Size: 4331 bytes Desc: not available URL: From jaosorior at redhat.com Thu Sep 6 17:52:07 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Thu, 6 Sep 2018 20:52:07 +0300 Subject: [openstack-dev] [TripleO] Meeting cancelled Message-ID: Due to folks being at the Denver PTG (including myself) there won't be a weekly meeting next week. BR From johnsomor at gmail.com Thu Sep 6 18:00:41 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 6 Sep 2018 11:00:41 -0700 Subject: [openstack-dev] [octavia] Weekly IRC meeting cancelled Sept. 12th Message-ID: Hello Octavia community, As many of us will be attending the OpenStack PTG next week, I am cancelling the weekly Octavia IRC meeting on September 12th. We will resume our normal schedule on September 19th. Michael From jaosorior at redhat.com Thu Sep 6 18:01:04 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Thu, 6 Sep 2018 21:01:04 +0300 Subject: [openstack-dev] [TripleO] Stein Forum @ Berlin Brainstorming Message-ID: Hey folks! It's time to come up with topics to discuss at the forum in Berlin [1]! There is an etherpad for us to bring up ideas: https://etherpad.openstack.org/p/tripleo-forum-stein We need to submit by September 12. Here is also the link to the wiki: https://wiki.openstack.org/wiki/Forum/Berlin2018#Etherpads_from_Teams_and_Working_Groups Best Regards [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134336.html From aj at suse.com Thu Sep 6 18:10:34 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 6 Sep 2018 20:10:34 +0200 Subject: [openstack-dev] [all][infra] Moving cover job from post to check pipeline Message-ID: <9a0076f2-8677-2d36-5dfd-f3a3ba8a0d26@suse.com> Citing Ian Wienand in [2] "There was a thread some time ago that suggested coverage jobs weren't doing much in the "post" pipeline because nobody looks at them and the change numbers may be difficult to find anyway [1].
This came up again in a cleanup to add non-voting coverage jobs in I5c42530d1dda41b8dc8c13cdb10458745bec7bcc. There really is no consistency across projects; it seems like a couple of different approaches have been cargo-cult copied as new projects came in, depending on which random project was used as a template. This change does a cleanup by moving all post coverage jobs into the check queue as non-voting jobs." I've updated Ian's change [2] now and propose to move ahead with it - and suggest that projects with in-repo coverage jobs follow it as well. Let's use the new template [3] openstack-cover-jobs (and its -horizon/-neutron variants) for this change. Andreas [1] http://lists.openstack.org/pipermail/openstack-dev/2016-July/099491.html [2] https://review.openstack.org/#/c/432836/ [3] https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-openstack-cover-jobs -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From rico.lin.guanyu at gmail.com Thu Sep 6 18:49:00 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 7 Sep 2018 02:49:00 +0800 Subject: [openstack-dev] [election][tc] Announcing candidacy Message-ID: Dear all, I'm announcing my candidacy for a position on the OpenStack Technical Committee. I'm Rico Lin. I have been in this community since 2014 and have been deeply involved in technical contributions [1]. I started out working on Heat, which let me work on integrating and managing resources from multiple projects. I have served as Heat PTL for two years, which taught me how to bring users' and operators' experiences and requirements together and integrate them with development workflows and technical decision processes.
Here are my major goals with this seat on the TC: * Cross-community integrations (K8s, CloudFoundry, Ceph, OPNFV) * Provide guidelines * Strengthen the structure of SIGs * Application Infra * Cross-cooperation between Users, Operators, and Developers * Diversity I'm already trying to put my goals into action, but I would really like to join the Technical Committee to bring more attention to these domains. Thank you for your consideration. Best Regards, Rico Lin IRC: ricolin Twitter: @ricolintw https://www.openstack.org/community/members/profile/33346/rico-lin http://stackalytics.com/?release=all&user_id=rico-lin&metric=person-day Here are some explanations of my goals: - Cross-community integrations (K8s, CloudFoundry, Ceph, OPNFV): This is a long-term goal for our community, but I would really like to see it gain more use-case scenarios and a clearer target for development. As we talk about Edge, AI, etc., it's essential to bring real use cases into this integration (for example, StarlingX brings some cross-project requirements from real use cases). On the other hand, the K8s SIG, Self-healing SIG, and FEMDC SIG are all good places for this kind of interaction and integration to happen. - Provide guidelines: There is one WIP guideline from the First Contact SIG that I am particularly interested in: the `Guidelines for Organisations Contributing to OpenStack` [4]. I believe this is quite important for showing how organizations can interact with the OpenStack community correctly. I try to work on the same goal from event to event as well (giving presentations like [5]). There are some other guidelines that need to be updated or renewed as well (most of us, who already read the mailing list and have worked in the community for a long time, may no longer need to read guidelines, but remember: whoever tries to join nowadays still needs an up-to-date guideline to give them hints). - Strengthen the structure of SIGs: As the two goals above show, SIGs play some important roles.
I would like to trigger discussions on how we can strengthen the structure of SIGs: make them more efficient and turn them into places where users and ops can directly interact with developers on real use-case issues, like an edge computing issue or self-healing service issues. I can't think of better places than the FEMDC SIG and the Self-healing SIG to record and target these issues. We might be able to let ops report issues on a SIG's StoryBoard and ask project teams to connect with and review them. There might be multiple ways to do this, so I would really like to trigger this discussion. - Application Infra: We've updated our resolution with [3], saying that we care about what applications need on top of OpenStack. As jobs from a few projects are taking on this role and thinking about what applications need, we should help by setting up community goals, making resolutions, or defining which top-priority applications (this can be a short-term definition) we need to focus on, taking action items/guidelines, and finding weaknesses, so others in the community can follow (for those who agree with the goals but have no idea how they can help, IMO this will be a good thing). - Cross-cooperation between Users, Operators, and Developers: We have been losing some communication across users, operators, and developers. It's never a good thing when users can share use cases, ops share experiences, and developers share code, but none of it reaches the others unless a user gets to the developers by themselves. Given this, work like StoryBoard should be among our first priorities. We need a more solid way to bring user feedback to developers, so we can actually learn what's working or not for each feature. It's also worth considering strengthening the communication between the TC and the UC (User Committee). We have taken some steps toward this goal (like merging the PTG and the Ops Meetup), but I believe we can make the interaction more active. - Diversity: The math is easy.
[2] shows we have around one-third of our users coming from Asia (with 75% of those users in China). Also, IIRC, around the same percentage of developers. But we have zero TC members from the region. The actual work is hard. We need to carry our technical guidance to developers in Asia and provide chances to get more feedback from them, so we can produce better technical resolutions that can tie developers together. I think I'm a good candidate for this. [1] http://stackalytics.com/?release=all&user_id=rico-lin&metric=person-day [2] https://www.openstack.org/assets/survey/OpenStack-User-Survey-Nov17.pdf [3] https://review.openstack.org/#/c/447031/5/resolutions/20170317-cloud-applications-mission.rst [4] http://lists.openstack.org/pipermail/openstack-dev/2018-August/134072.html [5] https://www.slideshare.net/GuanYuLin1/embrace-community-embrace-a-better-life -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Thu Sep 6 19:21:30 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 6 Sep 2018 15:21:30 -0400 Subject: [openstack-dev] [manila] No meeting 13 September 2018 Message-ID: <20180906192130.ybu7pjcc6ccrx2wl@barron.net> Manila folks, You likely already know, but we won't have our regular community meeting on irc next week because we'll be doing the PTG. See you there!
-- Tom Barron (tbarron) From mriedemos at gmail.com Thu Sep 6 19:31:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Sep 2018 14:31:01 -0500 Subject: [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: References: <5B86CF2E.5010708@openstack.org> Message-ID: <2b8ade00-b686-8fb8-e303-9ac25898b33b@gmail.com> On 8/29/2018 1:08 PM, Jim Rollenhagen wrote: > On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur > wrote: > > > Examples of typical sessions that make for a great Forum: > > Strategic, whole-of-community discussions, to think about the big > picture, including beyond just one release cycle and new technologies > > e.g. OpenStack One Platform for containers/VMs/Bare Metal (Strategic > session) the entire community congregates to share opinions on how > to make OpenStack achieve its integration engine goal > > > Just to clarify some speculation going on in IRC: this is an example, > right? Not a new thing being announced? > > // jim FYI for those that didn't see this on the other ML: http://lists.openstack.org/pipermail/foundation/2018-August/002617.html -- Thanks, Matt From openstack at fried.cc Thu Sep 6 19:41:32 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 6 Sep 2018 14:41:32 -0500 Subject: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests In-Reply-To: References: <11be89ad-a59a-1fe6-5c7b-badb4a06e643@fried.cc> Message-ID: <1b586dfd-594f-3f44-b6f3-8b232aa0ab5b@fried.cc> Jichen- That patch is not ever intended to merge; hope that was clear from the start :) It was just a proving ground to demonstrate which tests still pass when there's effectively no compute driver in play. We haven't taken any action on this from our end, though we have done a little brainstorming about how we would tool our CI to skip those tests most (but not all) of the time. Happy to share our experiences with you if/as we move forward with that. 
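To make that brainstorming concrete, here is one rough sketch of what such tagging could look like. Everything here is hypothetical — the decorator, the conf flag, and the skip helper are illustrations, not an existing tempest or nova mechanism:

```python
import unittest

# Hypothetical knob; in a real setup this would come from a one-line
# setting in tempest.conf rather than a module-level constant.
RUN_ONLY_COMPUTE_DRIVER_TESTS = True

def compute_driver_test(func):
    """Mark a test as one that actually exercises the compute driver."""
    func._compute_driver = True
    return func

def driver_gate(func):
    """Skip any unmarked test when the driver-only flag is switched on."""
    if RUN_ONLY_COMPUTE_DRIVER_TESTS and not getattr(func, "_compute_driver", False):
        return unittest.skip("does not touch the compute driver")(func)
    return func

class ServerTests(unittest.TestCase):
    @driver_gate
    @compute_driver_test
    def test_spawn_server(self):
        # Runs: spawning goes through the compute driver.
        self.assertTrue(True)

    @driver_gate
    def test_list_flavors(self):
        # Skipped under the flag: API-only, never reaches the driver.
        self.assertTrue(True)
```

With the flag off, everything runs as usual; with it on, only the marked tests execute, which is the "run the rest once every N runs" behavior described below.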
Regarding the tempest-level automation, I certainly had z in mind when I was thinking about it. If you have the time and inclination to help look into it, that would be most welcome. Thanks, efried On 09/06/2018 12:33 AM, Chen CH Ji wrote: > I see the patch is still ongoing status and do you have a follow up > plan/discussion for that? we are maintaining 2 CIs (z/VM and KVM on z) > so skip non-compute related cases will be a good for 3rd part CI .. thanks > > Best Regards! > > Kevin (Chen) Ji 纪 晨 > > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > Phone: +86-10-82451493 > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > District, Beijing 100193, PRC > > From: Eric Fried > To: "OpenStack Development Mailing List (not for usage questions)" > > Date: 09/04/2018 09:35 PM > Subject: [openstack-dev] [tempest][CI][nova compute] Skipping > non-compute-driver tests > > ------------------------------------------------------------------------ > > > > Folks- > > The other day, I posted an experimental patch [1] with an effectively > empty ComputeDriver (just enough to make n-cpu actually start) to see > how much of our CI would pass. The theory being that any tests that > still pass are tests that don't touch our compute driver, and are > therefore not useful to run in our CI environment. Because anything that > doesn't touch our code should already be well covered by generic > dsvm-tempest CIs. The results [2] show that 707 tests still pass. > > So I'm wondering whether there might be a way to mark tests as being > "compute driver-specific" such that we could switch off all the other
Because surely this has potential > to save a lot of CI resource not just for us but for other driver > vendors, in tree and out. > > Thanks, > efried > > [1] https://review.openstack.org/#/c/599066/ > [2] > http://184.172.12.213/66/599066/5/check/nova-powervm-out-of-tree-pvm/a1b42d5/powervm_os_ci.html.gz > [3] I get that there's still value in running all those tests. But it > could be done like once every 10 or 50 or 100 runs instead of every time. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Thu Sep 6 19:56:53 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 6 Sep 2018 19:56:53 +0000 Subject: [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: <2b8ade00-b686-8fb8-e303-9ac25898b33b@gmail.com> References: <5B86CF2E.5010708@openstack.org> <2b8ade00-b686-8fb8-e303-9ac25898b33b@gmail.com> Message-ID: <20180906195653.xarf2dusohaki55t@yuggoth.org> On 2018-09-06 14:31:01 -0500 (-0500), Matt Riedemann wrote: > On 8/29/2018 1:08 PM, Jim Rollenhagen wrote: > > On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur > > wrote: > > > > > > Examples of typical sessions that make for a great Forum: > > > > Strategic, whole-of-community discussions, to think about the big > > picture, including beyond just one release cycle and new technologies > > > > e.g. 
OpenStack One Platform for containers/VMs/Bare Metal (Strategic > > session) the entire community congregates to share opinions on how > > to make OpenStack achieve its integration engine goal > > > > > > Just to clarify some speculation going on in IRC: this is an example, > > right? Not a new thing being announced? > > > > // jim > > FYI for those that didn't see this on the other ML: > > http://lists.openstack.org/pipermail/foundation/2018-August/002617.html [...] While I agree that's a great post to point out to all corners of the community, I don't see what it has to do with whether "OpenStack One Platform for containers/VMs/Bare Metal" was an example forum topic. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From lbragstad at gmail.com Thu Sep 6 20:01:01 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 6 Sep 2018 15:01:01 -0500 Subject: [openstack-dev] [stable][keystone] python3 goal progress and tox_install.sh removal Message-ID: I'm noticing some odd cases with respect to the python 3 community goal [0]. So far my findings are specific to keystone repositories, but I can imagine this affecting other projects. Doug generated the python 3 reviews for keystone repositories, including the ones for stable branches. We noticed some issues with the ones proposed to stable (keystoneauth, python-keystoneclient) and master (keystonemiddleware). For example, python-keystoneclient's stable/pike [1] and stable/ocata [2] branches are both failing with something like [3]: ERROR: You must give at least one requirement to install (see "pip help install") Both of those branches still use tox_install.sh [4][5]. Master, stable/rocky, and stable/queens do not, which passed fine. It was suggested that we backport patches to the failing branches that remove tox_install.sh (similar to [6]). 
I've attempted to do this for python-keystoneclient, keystonemiddleware, and keystoneauth. The keystonemiddleware patches specifically are hitting a weird case, where they either fail tests due to issues installing keystonemiddleware itself, or pass tests and fail the requirements check. I'm guessing (because I don't really fully understand the whole issue yet) this is because keystonemiddleware has an optional dependency for tests and somehow the installation process worked with tox_install.sh and doesn't work with the new way we do things with pip and zuul. I've attempted to remove tox_install.sh using several approaches with keystonemiddleware master [7]. None of which passed both unit tests and the requirements check. I'm wondering if anyone has a definitive summary or context on tox_install.sh and removing it cleanly for cases like keystonemiddleware. Additionally, is anyone else noticing issues like this with their stable branches? [0] https://governance.openstack.org/tc/goals/stein/python3-first.html [1] https://review.openstack.org/#/c/597685/ [2] https://review.openstack.org/#/c/597679/ [3] http://logs.openstack.org/85/597685/1/check/build-openstack-sphinx-docs/4f817dd/job-output.txt.gz#_2018-08-29_20_49_17_877448 [4] https://git.openstack.org/cgit/openstack/python-keystoneclient/tree/tools/tox_install.sh?h=stable/pike [5] https://git.openstack.org/cgit/openstack/python-keystoneclient/tree/tools/tox_install.sh?h=stable/ocata [6] https://review.openstack.org/#/c/524828/3 [7] https://review.openstack.org/#/c/599003/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Thu Sep 6 20:03:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Sep 2018 15:03:52 -0500 Subject: [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: <20180906195653.xarf2dusohaki55t@yuggoth.org> References: <5B86CF2E.5010708@openstack.org> <2b8ade00-b686-8fb8-e303-9ac25898b33b@gmail.com> <20180906195653.xarf2dusohaki55t@yuggoth.org> Message-ID: <077492a5-5875-2a5a-0ed6-7529bbb74f91@gmail.com> On 9/6/2018 2:56 PM, Jeremy Stanley wrote: > On 2018-09-06 14:31:01 -0500 (-0500), Matt Riedemann wrote: >> On 8/29/2018 1:08 PM, Jim Rollenhagen wrote: >>> On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur >> > wrote: >>> >>> >>> Examples of typical sessions that make for a great Forum: >>> >>> Strategic, whole-of-community discussions, to think about the big >>> picture, including beyond just one release cycle and new technologies >>> >>> e.g. OpenStack One Platform for containers/VMs/Bare Metal (Strategic >>> session) the entire community congregates to share opinions on how >>> to make OpenStack achieve its integration engine goal >>> >>> >>> Just to clarify some speculation going on in IRC: this is an example, >>> right? Not a new thing being announced? >>> >>> // jim >> FYI for those that didn't see this on the other ML: >> >> http://lists.openstack.org/pipermail/foundation/2018-August/002617.html > [...] > > While I agree that's a great post to point out to all corners of the > community, I don't see what it has to do with whether "OpenStack One > Platform for containers/VMs/Bare Metal" was an example forum topic. Because if I'm not mistaken it was the impetus for the hullabaloo in the tc channel that was related to the foundation ML post. 
-- Thanks, Matt From openstack at nemebean.com Thu Sep 6 20:28:38 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 6 Sep 2018 15:28:38 -0500 Subject: [openstack-dev] [stable][keystone] python3 goal progress and tox_install.sh removal In-Reply-To: References: Message-ID: On 09/06/2018 03:01 PM, Lance Bragstad wrote: > I'm noticing some odd cases with respect to the python 3 community goal > [0]. So far my findings are specific to keystone repositories, but I can > imagine this affecting other projects. > > Doug generated the python 3 reviews for keystone repositories, including > the ones for stable branches. We noticed some issues with the ones > proposed to stable (keystoneauth, python-keystoneclient) and master > (keystonemiddleware). For example, python-keystoneclient's stable/pike > [1] and stable/ocata [2] branches are both failing with something like [3]: > > ERROR: You must give at least one requirement to install (see "pip help > install") We ran into this on the Oslo stable branches too. Instead of trying to migrate everything, we just tweaked tox_install.sh to avoid the problem: https://github.com/openstack/oslo.concurrency/commit/a009e7c86901473ac2273cb9ba604dc2a6b579d3 > > Both of those branches still use tox_install.sh [4][5]. Master, > stable/rocky, and stable/queens do not, which passed fine. It was > suggested that we backport patches to the failing branches that remove > tox_install.sh (similar to [6]). I've attempted to do this for > python-keystoneclient, keystonemiddleware, and keystoneauth. > > The keystonemiddleware patches specifically are hitting a weird case, > where they either fail tests due to issues installing keystonemiddleware > itself, or pass tests and fail the requirements check. 
I'm guessing > (because I don't really fully understand the whole issue yet) this is > because keystonemiddleware has an optional dependency for tests and > somehow the installation process worked with tox_install.sh and doesn't > work with the new way we do things with pip and zuul. > > I've attempted to remove tox_install.sh using several approaches with > keystonemiddleware master [7]. None of which passed both unit tests and > the requirements check. > > I'm wondering if anyone has a definitive summary or context on > tox_install.sh and removing it cleanly for cases like > keystonemiddleware. Additionally, is anyone else noticing issues like > this with their stable branches? > > [0] https://governance.openstack.org/tc/goals/stein/python3-first.html > [1] https://review.openstack.org/#/c/597685/ > [2] https://review.openstack.org/#/c/597679/ > [3] > http://logs.openstack.org/85/597685/1/check/build-openstack-sphinx-docs/4f817dd/job-output.txt.gz#_2018-08-29_20_49_17_877448 > [4] > https://git.openstack.org/cgit/openstack/python-keystoneclient/tree/tools/tox_install.sh?h=stable/pike > [5] > https://git.openstack.org/cgit/openstack/python-keystoneclient/tree/tools/tox_install.sh?h=stable/ocata > [6] https://review.openstack.org/#/c/524828/3 > [7] https://review.openstack.org/#/c/599003/ > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From petebirley at gmail.com Thu Sep 6 20:42:48 2018 From: petebirley at gmail.com (Pete Birley) Date: Thu, 6 Sep 2018 15:42:48 -0500 Subject: [openstack-dev] [openstack-helm] [Openstack-Helm] Denver PTG Schedule Message-ID: Hey! 
The Openstack-Helm team has put together a rough schedule and outline for the PTG: - https://etherpad.openstack.org/p/openstack-helm-ptg-stein Really looking forward to seeing everyone all in Denver :) Cheers Pete -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Sep 6 20:58:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Sep 2018 15:58:41 -0500 Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction Message-ID: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> I wanted to recap some upgrade-specific stuff from today outside of the other [1] technical extraction thread. Chris has a change up for review [2] which prompted the discussion. That change makes placement only work with placement.conf, not nova.conf, but does get a passing tempest run in the devstack patch [3]. The main issue here is upgrades. If you think of this like deprecating config options, the old config options continue to work for a release and then are dropped after a full release (or 3 months across boundaries for CDers) [4]. Given that, Chris's patch would break the standard deprecation policy. Clearly one simple way outside of code to make that work is just copy and rename nova.conf to placement.conf and voila. But that depends on *all* deployment/config tooling to get that right out of the gate. The other obvious thing is the database. The placement repo code as-is today still has the check for whether or not it should use the placement database but falls back to using the nova_api database [5]. So technically you could point the extracted placement at the same nova_api database and it should work. However, at some point deployers will clearly need to copy the placement-related tables out of the nova_api DB to a new placement DB and make sure the 'migrate_version' table is dropped so that placement DB schema versions can reset to 1. 
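As a rough illustration of that copy step (this is not an official migration script, and the table list below is an assumption — a real script would derive it from the placement schema), the cut-over amounts to generating and running SQL along these lines:

```python
# Illustrative sketch: build the SQL statements for copying the
# placement-related tables out of nova_api into a fresh placement DB.
# The table names here are assumptions for illustration only.
PLACEMENT_TABLES = (
    "resource_providers", "inventories", "allocations",
    "resource_classes", "traits", "resource_provider_traits",
    "resource_provider_aggregates", "placement_aggregates",
    "consumers", "projects", "users",
)

def cutover_sql(src="nova_api", dst="placement"):
    """Return the ordered SQL statements for the table copy."""
    stmts = ["CREATE DATABASE IF NOT EXISTS {};".format(dst)]
    for table in PLACEMENT_TABLES:
        stmts.append("CREATE TABLE {d}.{t} LIKE {s}.{t};".format(
            d=dst, t=table, s=src))
        stmts.append("INSERT INTO {d}.{t} SELECT * FROM {s}.{t};".format(
            d=dst, t=table, s=src))
    # Deliberately no copy of migrate_version, so the new placement DB's
    # schema versioning can reset to 1 as described above.
    return stmts
```

Note what is *not* generated: no statement touches migrate_version, which is exactly the "make sure it's dropped" point above.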
With respect to grenade and making this work in our own upgrade CI testing, we have I think two options (which might not be mutually exclusive): 1. Make placement support using nova.conf if placement.conf isn't found for Stein with lots of big warnings that it's going away in T. Then Rocky nova.conf with the nova_api database configuration just continues to work for placement in Stein. I don't think we then have any grenade changes to make, at least in Stein for upgrading *from* Rocky. Assuming fresh devstack installs in Stein use placement.conf and a placement-specific database, then upgrades from Stein to T should also be OK with respect to grenade, but likely punts the cut-over issue for all other deployment projects (because we don't CI with grenade doing Rocky->Stein->T, or FFU in other words). 2. If placement doesn't support nova.conf in Stein, then grenade will require an (exceptional) [6] from-rocky upgrade script which will (a) write out placement.conf fresh and (b) run a DB migration script, likely housed in the placement repo, to create the placement database and copy the placement-specific tables out of the nova_api database. Any script like this is likely needed regardless of what we do in grenade because deployers will need to eventually do this once placement would drop support for using nova.conf (if we went with option 1). That's my attempt at a summary. It's going to be very important that operators and deployment project contributors weigh in here if they have strong preferences either way, and note that we can likely do both options above - grenade could do the fresh cutover from rocky to stein but we allow running with nova.conf and nova_api DB in placement in stein with plans to drop that support in T. 
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/subject.html#134184 [2] https://review.openstack.org/#/c/600157/ [3] https://review.openstack.org/#/c/600162/ [4] https://governance.openstack.org/tc/reference/tags/assert_follows-standard-deprecation.html#requirements [5] https://github.com/openstack/placement/blob/fb7c1909/placement/db_api.py#L27 [6] https://docs.openstack.org/grenade/latest/readme.html#theory-of-upgrade -- Thanks, Matt From fungi at yuggoth.org Thu Sep 6 21:06:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 6 Sep 2018 21:06:50 +0000 Subject: [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: <077492a5-5875-2a5a-0ed6-7529bbb74f91@gmail.com> References: <5B86CF2E.5010708@openstack.org> <2b8ade00-b686-8fb8-e303-9ac25898b33b@gmail.com> <20180906195653.xarf2dusohaki55t@yuggoth.org> <077492a5-5875-2a5a-0ed6-7529bbb74f91@gmail.com> Message-ID: <20180906210650.ss7p4k67viqiu6wg@yuggoth.org> On 2018-09-06 15:03:52 -0500 (-0500), Matt Riedemann wrote: > On 9/6/2018 2:56 PM, Jeremy Stanley wrote: > > On 2018-09-06 14:31:01 -0500 (-0500), Matt Riedemann wrote: > > > On 8/29/2018 1:08 PM, Jim Rollenhagen wrote: > > > > On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur > > > > wrote: > > > > > > > > > > > > Examples of typical sessions that make for a great Forum: > > > > > > > > Strategic, whole-of-community discussions, to think about the big > > > > picture, including beyond just one release cycle and new technologies > > > > > > > > e.g. OpenStack One Platform for containers/VMs/Bare Metal (Strategic > > > > session) the entire community congregates to share opinions on how > > > > to make OpenStack achieve its integration engine goal > > > > > > > > > > > > Just to clarify some speculation going on in IRC: this is an example, > > > > right? Not a new thing being announced? 
> > > > > > > > // jim > > > FYI for those that didn't see this on the other ML: > > > > > > http://lists.openstack.org/pipermail/foundation/2018-August/002617.html > > [...] > > > > While I agree that's a great post to point out to all corners of the > > community, I don't see what it has to do with whether "OpenStack One > > Platform for containers/VMs/Bare Metal" was an example forum topic. > > Because if I'm not mistaken it was the impetus for the hullabaloo in the tc > channel that was related to the foundation ML post. It would be more accurate to say that community surprise over the StarlingX mention in Vancouver keynotes caused some people to (either actually or merely in half-jest) start looking for subtext everywhere indicating the next big surprise announcement. The discussion[*] in #openstack-tc readily acknowledged that most of its participants didn't think "OpenStack One Platform for containers/VMs/Bare Metal" was an actual proposal for a forum discussion much less announcement of a new project, but were just looking for an opportunity to show feigned alarm and sarcasm. The most recent discussion[**] leading up to the foundation ML "OSF Open Infrastructure Projects" update occurred the previous week. That E-mail did go out the day after the forum topic brainstorming example discussion, but was unrelated (and already in the process of being put together by then). [*] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-29.log.html#t2018-08-29T16:55:37 [**] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-23.log.html#t2018-08-23T16:23:00 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Thu Sep 6 21:25:53 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 06 Sep 2018 17:25:53 -0400 Subject: [openstack-dev] [stable][keystone] python3 goal progress and tox_install.sh removal In-Reply-To: References: Message-ID: <1536268836-sup-7453@lrrr.local> Excerpts from Lance Bragstad's message of 2018-09-06 15:01:01 -0500: > I'm noticing some odd cases with respect to the python 3 community goal > [0]. So far my findings are specific to keystone repositories, but I can > imagine this affecting other projects. > > Doug generated the python 3 reviews for keystone repositories, including > the ones for stable branches. We noticed some issues with the ones proposed > to stable (keystoneauth, python-keystoneclient) and master > (keystonemiddleware). For example, python-keystoneclient's stable/pike [1] > and stable/ocata [2] branches are both failing with something like [3]: > > ERROR: You must give at least one requirement to install (see "pip help > install") > > Both of those branches still use tox_install.sh [4][5]. Master, > stable/rocky, and stable/queens do not, which passed fine. It was suggested > that we backport patches to the failing branches that remove tox_install.sh > (similar to [6]). I've attempted to do this for python-keystoneclient, > keystonemiddleware, and keystoneauth. > > The keystonemiddleware patches specifically are hitting a weird case, where > they either fail tests due to issues installing keystonemiddleware itself, The "installing itself" problem is related to the fact that the library under test is also listed in the constraints list and the deps list in tox.ini contains ".[audit_notifications]", which tries to install the library under test while the library is constrained. The simplest thing to do to fix that is probably just add those test dependencies to test-requirements.txt. 
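When moving deps into test-requirements.txt, note that the requirements check also insists each entry carries a lower bound. A simplified stand-in for that validation (the real check lives in openstack/requirements and handles far more of the requirement grammar):

```python
import re

def missing_lower_bound(lines):
    """Return package names whose spec has no >= or == lower bound.

    Simplified illustration of the lower-bound rule; not the actual
    openstack/requirements implementation.
    """
    flagged = []
    for raw in lines:
        spec = raw.split("#")[0].strip()  # drop trailing license comments
        if not spec:
            continue
        # Package name is everything before the first operator/marker char.
        name = re.split(r"[<>=!~;\s\[]", spec, maxsplit=1)[0]
        if not re.search(r"(>=|==)", spec):
            flagged.append(name)
    return flagged
```

Run against a file containing a bare `stestr` line, this flags it the same way the gate job does.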
> or pass tests and fail the requirements check. I'm guessing (because I The requirements check looks legit: Requirement for package stestr has no lower bound And I indeed don't see a >= value for stestr in the test-requirements.txt file. > don't really fully understand the whole issue yet) this is because > keystonemiddleware has an optional dependency for tests and somehow the > installation process worked with tox_install.sh and doesn't work with the > new way we do things with pip and zuul. > > I've attempted to remove tox_install.sh using several approaches with > keystonemiddleware master [7]. None of which passed both unit tests and the > requirements check. > > I'm wondering if anyone has a definitive summary or context on > tox_install.sh and removing it cleanly for cases like keystonemiddleware. > Additionally, is anyone else noticing issues like this with their stable > branches? > > [0] https://governance.openstack.org/tc/goals/stein/python3-first.html > [1] https://review.openstack.org/#/c/597685/ > [2] https://review.openstack.org/#/c/597679/ > [3] > http://logs.openstack.org/85/597685/1/check/build-openstack-sphinx-docs/4f817dd/job-output.txt.gz#_2018-08-29_20_49_17_877448 > [4] > https://git.openstack.org/cgit/openstack/python-keystoneclient/tree/tools/tox_install.sh?h=stable/pike > [5] > https://git.openstack.org/cgit/openstack/python-keystoneclient/tree/tools/tox_install.sh?h=stable/ocata > [6] https://review.openstack.org/#/c/524828/3 > [7] https://review.openstack.org/#/c/599003/ From kennelson11 at gmail.com Thu Sep 6 23:47:48 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 6 Sep 2018 16:47:48 -0700 Subject: [openstack-dev] [TC][All] Nominations End & Campaigning Begins Message-ID: Hello All! The TC Nomination period is now over. The official candidate list is available on the election website[0]. Now begins the campaigning period where candidates and electorate may debate their statements. 
Polling will start Sep 18, 2018 23:45 UTC. Thank you, -The Election Officials [0] http://governance.openstack.org/election/#stein-tc-candidates -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Thu Sep 6 23:54:47 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 7 Sep 2018 08:54:47 +0900 Subject: [openstack-dev] [Storyboard][Searchlight] Commits of searchlight-specs are not attached to the story In-Reply-To: <20180906170434.dpty2di6al6pgqx2@yuggoth.org> References: <20180906170434.dpty2di6al6pgqx2@yuggoth.org> Message-ID: Hi Jeremy, Oops. Thanks. I didn't notice that :) *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Fri, Sep 7, 2018 at 2:04 AM Jeremy Stanley wrote: > On 2018-09-06 16:39:13 +0900 (+0900), Trinh Nguyen wrote: > > Looks like the commits for searchlight-specs are not attached to > > the story on Storyboard. Example: > > > > Commit: https://review.openstack.org/#/c/600316/ > > Story: https://storyboard.openstack.org/#!/story/2003677 > > > > Is there anything that I need to do to link those 2 together? > > In change 600316 you included a "Task: #2619" footer, when the > actual task you seem to have wanted to reference was 26199 (task > 2619 is associated with unrelated story 2000403). > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From persia at shipstone.jp Fri Sep 7 00:28:11 2018 From: persia at shipstone.jp (Emmet Hikory) Date: Fri, 7 Sep 2018 09:28:11 +0900 Subject: [openstack-dev] [all][tc][election] TC Election Campaigning Period Message-ID: <20180907002811.GA22418@shipstone.jp> Developers, The TC Election Campaigning Period has now started (1). During the next couple days, you are all encouraged to ask the candidates questions about their platforms (2), opinions on OpenStack, community governance, and anything else that will help you to better determine how you will vote. This is the time to raise any issues you wish the future TC to consider, and to evaluate the opinions of the nominees prior to their election. Candidates, Each of you has posted a platform (2), and announced your nomination to the developers. From this point, you are encouraged to ask each other questions about the posted platforms, and begin discussion of any points that you feel are particularly important during the next cycle. While you are not yet TC members, your voices and opinions about the issues raised in your platforms and questions raised by the wider community will help ensure that the future TC has the widest possible input on the matters of community concern, and the electorate has the best information to determine the ideal TC composition to address these and other issues that may arise. Scheduling Note: This cycle, the campaigning season overlaps with PTG. While our community strives to have many of our persistent communications in fora that permit participation by all, in-person discussions on some issues raised in campaigning will be inevitable. For those that are attending PTG, if you feel that an in-person discussion related to campaigning is of interest to the wider community, please summarise it to the mailing list. 
For those not attending, if you have a query raised on the mailing list that you feel is not receiving attention, consider asking someone attending PTG to relay the question to a candidate. 1: https://governance.openstack.org/election/ 2: http://git.openstack.org/cgit/openstack/election/tree/candidates/stein/TC -- Emmet HIKORY From rochelle.grober at huawei.com Fri Sep 7 00:39:59 2018 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Fri, 7 Sep 2018 00:39:59 +0000 Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction In-Reply-To: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: Sounds like an important discussion to have with the operators in Denver. Should put this on the schedule for the Ops meetup. --Rocky > -----Original Message----- > From: Matt Riedemann [mailto:mriedemos at gmail.com] > Sent: Thursday, September 06, 2018 1:59 PM > To: OpenStack Development Mailing List (not for usage questions) > ; openstack- > operators at lists.openstack.org > Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade- > specific news on extraction > > I wanted to recap some upgrade-specific stuff from today outside of the > other [1] technical extraction thread. > > Chris has a change up for review [2] which prompted the discussion. > > That change makes placement only work with placement.conf, not > nova.conf, but does get a passing tempest run in the devstack patch [3]. > > The main issue here is upgrades. If you think of this like deprecating config > options, the old config options continue to work for a release and then are > dropped after a full release (or 3 months across boundaries for CDers) [4]. > Given that, Chris's patch would break the standard deprecation policy. Clearly > one simple way outside of code to make that work is just copy and rename > nova.conf to placement.conf and voila. 
But that depends on *all* > deployment/config tooling to get that right out of the gate. > > The other obvious thing is the database. The placement repo code as-is > today still has the check for whether or not it should use the placement > database but falls back to using the nova_api database [5]. So technically you > could point the extracted placement at the same nova_api database and it > should work. However, at some point deployers will clearly need to copy the > placement-related tables out of the nova_api DB to a new placement DB and > make sure the 'migrate_version' table is dropped so that placement DB > schema versions can reset to 1. > > With respect to grenade and making this work in our own upgrade CI testing, > we have I think two options (which might not be mutually > exclusive): > > 1. Make placement support using nova.conf if placement.conf isn't found for > Stein with lots of big warnings that it's going away in T. Then Rocky nova.conf > with the nova_api database configuration just continues to work for > placement in Stein. I don't think we then have any grenade changes to make, > at least in Stein for upgrading *from* Rocky. Assuming fresh devstack installs > in Stein use placement.conf and a placement-specific database, then > upgrades from Stein to T should also be OK with respect to grenade, but > likely punts the cut-over issue for all other deployment projects (because we > don't CI with grenade doing > Rocky->Stein->T, or FFU in other words). > > 2. If placement doesn't support nova.conf in Stein, then grenade will require > an (exceptional) [6] from-rocky upgrade script which will (a) write out > placement.conf fresh and (b) run a DB migration script, likely housed in the > placement repo, to create the placement database and copy the placement- > specific tables out of the nova_api database. 
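The copy-and-reset step just described can be sketched as follows. This is only an illustration -- it uses SQLite in place of MySQL and a made-up subset of the placement-related tables; an actual deployment would use mysqldump or a migration script shipped with the placement repo:

```python
import sqlite3

# Hypothetical stand-ins for the nova_api and new placement databases.
nova_api = sqlite3.connect(":memory:")
placement = sqlite3.connect(":memory:")

# Example placement-related tables; the real list comes from the
# placement schema (resource_providers, inventories, allocations, ...).
PLACEMENT_TABLES = ["resource_providers", "inventories", "allocations"]

nova_api.execute(
    "CREATE TABLE resource_providers (id INTEGER PRIMARY KEY, name TEXT)")
nova_api.execute("INSERT INTO resource_providers (name) VALUES ('compute1')")
nova_api.execute("CREATE TABLE inventories (id INTEGER PRIMARY KEY)")
nova_api.execute("CREATE TABLE allocations (id INTEGER PRIMARY KEY)")
# migrate_version is deliberately NOT copied below, so the new placement
# DB's schema versioning can reset and start again from 1.
nova_api.execute("CREATE TABLE migrate_version (version INTEGER)")
nova_api.commit()

for table in PLACEMENT_TABLES:
    # Recreate each table's schema in the new DB, then copy its rows.
    schema = nova_api.execute(
        "SELECT sql FROM sqlite_master WHERE name = ?", (table,)).fetchone()[0]
    placement.execute(schema)
    for row in nova_api.execute("SELECT * FROM %s" % table):
        marks = ",".join("?" * len(row))
        placement.execute("INSERT INTO %s VALUES (%s)" % (table, marks), row)
placement.commit()

copied = sorted(r[0] for r in placement.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"))
print(copied)  # -> ['allocations', 'inventories', 'resource_providers']
```

The essential point the sketch encodes is the paragraph above: every placement-related table is carried over, while migrate_version is left behind so the new database's schema versions can reset to 1.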
Any script like this is likely > needed regardless of what we do in grenade because deployers will need to > eventually do this once placement would drop support for using nova.conf (if > we went with option 1). > > That's my attempt at a summary. It's going to be very important that > operators and deployment project contributors weigh in here if they have > strong preferences either way, and note that we can likely do both options > above - grenade could do the fresh cutover from rocky to stein but we allow > running with nova.conf and nova_api DB in placement in stein with plans to > drop that support in T. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018- > September/subject.html#134184 > [2] https://review.openstack.org/#/c/600157/ > [3] https://review.openstack.org/#/c/600162/ > [4] > https://governance.openstack.org/tc/reference/tags/assert_follows- > standard-deprecation.html#requirements > [5] > https://github.com/openstack/placement/blob/fb7c1909/placement/db_api > .py#L27 > [6] https://docs.openstack.org/grenade/latest/readme.html#theory-of- > upgrade > > -- > > Thanks, > > Matt > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emccormick at cirrusseven.com Fri Sep 7 01:29:00 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Thu, 6 Sep 2018 21:29:00 -0400 Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction In-Reply-To: References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: On Thu, Sep 6, 2018, 8:40 PM Rochelle Grober wrote: > Sounds like an important discussion to have with the operators in Denver. > Should put this on the schedule for the Ops meetup. 
> > --Rocky > We are planning to attend the upgrade sessions on Monday as a group. How about we put it there? -Erik > > > -----Original Message----- > > From: Matt Riedemann [mailto:mriedemos at gmail.com] > > Sent: Thursday, September 06, 2018 1:59 PM > > To: OpenStack Development Mailing List (not for usage questions) > > ; openstack- > > operators at lists.openstack.org > > Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade- > > specific news on extraction > > > > I wanted to recap some upgrade-specific stuff from today outside of the > > other [1] technical extraction thread. > > > > Chris has a change up for review [2] which prompted the discussion. > > > > That change makes placement only work with placement.conf, not > > nova.conf, but does get a passing tempest run in the devstack patch [3]. > > > > The main issue here is upgrades. If you think of this like deprecating > config > > options, the old config options continue to work for a release and then > are > > dropped after a full release (or 3 months across boundaries for CDers) > [4]. > > Given that, Chris's patch would break the standard deprecation policy. > Clearly > > one simple way outside of code to make that work is just copy and rename > > nova.conf to placement.conf and voila. But that depends on *all* > > deployment/config tooling to get that right out of the gate. > > > > The other obvious thing is the database. The placement repo code as-is > > today still has the check for whether or not it should use the placement > > database but falls back to using the nova_api database [5]. So > technically you > > could point the extracted placement at the same nova_api database and it > > should work. However, at some point deployers will clearly need to copy > the > > placement-related tables out of the nova_api DB to a new placement DB and > > make sure the 'migrate_version' table is dropped so that placement DB > > schema versions can reset to 1. 
> > > > With respect to grenade and making this work in our own upgrade CI > testing, > > we have I think two options (which might not be mutually > > exclusive): > > > > 1. Make placement support using nova.conf if placement.conf isn't found > for > > Stein with lots of big warnings that it's going away in T. Then Rocky > nova.conf > > with the nova_api database configuration just continues to work for > > placement in Stein. I don't think we then have any grenade changes to > make, > > at least in Stein for upgrading *from* Rocky. Assuming fresh devstack > installs > > in Stein use placement.conf and a placement-specific database, then > > upgrades from Stein to T should also be OK with respect to grenade, but > > likely punts the cut-over issue for all other deployment projects > (because we > > don't CI with grenade doing > > Rocky->Stein->T, or FFU in other words). > > > > 2. If placement doesn't support nova.conf in Stein, then grenade will > require > > an (exceptional) [6] from-rocky upgrade script which will (a) write out > > placement.conf fresh and (b) run a DB migration script, likely housed in > the > > placement repo, to create the placement database and copy the placement- > > specific tables out of the nova_api database. Any script like this is > likely > > needed regardless of what we do in grenade because deployers will need to > > eventually do this once placement would drop support for using nova.conf > (if > > we went with option 1). > > > > That's my attempt at a summary. It's going to be very important that > > operators and deployment project contributors weigh in here if they have > > strong preferences either way, and note that we can likely do both > options > > above - grenade could do the fresh cutover from rocky to stein but we > allow > > running with nova.conf and nova_api DB in placement in stein with plans > to > > drop that support in T. 
> > > > [1] > > http://lists.openstack.org/pipermail/openstack-dev/2018- > > September/subject.html#134184 > > [2] https://review.openstack.org/#/c/600157/ > > [3] https://review.openstack.org/#/c/600162/ > > [4] > > https://governance.openstack.org/tc/reference/tags/assert_follows- > > standard-deprecation.html#requirements > > [5] > > https://github.com/openstack/placement/blob/fb7c1909/placement/db_api > > .py#L27 > > [6] https://docs.openstack.org/grenade/latest/readme.html#theory-of- > > upgrade > > > > -- > > > > Thanks, > > > > Matt > > > > __________________________________________________________ > > ________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev- > > request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jazeltq at gmail.com Fri Sep 7 01:55:37 2018 From: jazeltq at gmail.com (Jaze Lee) Date: Fri, 7 Sep 2018 09:55:37 +0800 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: References: Message-ID: Lance Bragstad 于2018年9月6日周四 下午10:01写道: > > I wish there was a better answer for this question, but currently there are only a handful of us working on the initiative. If you, or someone you know, is interested in getting involved, I'll happily help onboard people. Well, I can recommend some of my colleagues to work on this. I hope that in Stein all services can use unified limits to do the quota job. > > On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee wrote: >> >> On Stein only one service? >> Is there some methods to move this more fast?
>> Lance Bragstad 于2018年9月5日周三 下午9:29写道: >> > >> > Not yet. Keystone worked through a bunch of usability improvements with the unified limits API last release and created the oslo.limit library. We have a patch or two left to land in oslo.limit before projects can really start using unified limits [0]. >> > >> > We're hoping to get this working with at least one resource in another service (nova, cinder, etc...) in Stein. >> > >> > [0] https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init >> > >> > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee wrote: >> >> >> >> Hello, >> >> Does nova and cinder use keystone's unified limits api to do quota job? >> >> If not, is there a plan to do this? >> >> Thanks a lot. >> >> >> >> -- >> >> 谦谦君子 >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> 谦谦君子 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- 谦谦君子 From xiang.edison at gmail.com Fri Sep 7 02:03:52 2018 From: xiang.edison at gmail.com (Edison Xiang) Date: Fri, 7 Sep 2018 10:03:52 +0800 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> Message-ID: Hey gilles, Thanks for your introduction to GraphQL and Relay. > GraphQL and OpenAPI have a different feature scope and both have pros and cons. I totally agree with you. They can work together. Right now, I think we have no more work to do to adapt OpenStack APIs for Open API. Firstly we could sort out Open API schemas based on the current OpenStack APIs, and then we can discuss how to use them. About the microversion, we discussed it with your teammate dmitry in another email [1] [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134202.html Best Regards, Edison Xiang On Tue, Sep 4, 2018 at 8:37 AM Gilles Dubreuil wrote: > > > On 30/08/18 13:56, Edison Xiang wrote: > > Hi Ed Leafe, > > Thanks for your reply. > Open API defines a standard interface description for REST APIs. > Open API 3.0 can make a description (schema) for the current OpenStack REST API. > It will not change the current OpenStack API. > I am not a GraphQL expert. I looked up something about GraphQL. > In my understanding, GraphQL will get the current OpenAPI together and provide > other APIs based on Relay, > > > Not sure what you mean here, could you please develop? > > > and Open API is used to describe REST APIs and GraphQL is used to describe > Relay APIs. > > > There is no such thing as "Relay APIs". > GraphQL provides a de-facto API Schema and Relay provides extensions on top > to facilitate re-fetching, paging and more. > GraphQL and OpenAPI have a different feature scope and both have pros and > cons.
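For readers following the anyOf/oneOf/allOf part of this thread: those are the schema combinators OpenAPI 3.0 added over Swagger 2.0. A minimal, purely illustrative fragment (not an actual OpenStack schema) showing oneOf:

```yaml
# Illustrative OpenAPI 3.0 fragment: oneOf lets a property validate
# against exactly one of several alternatives, which Swagger 2.0
# could not express.
components:
  schemas:
    ServerAddress:
      oneOf:
        - $ref: '#/components/schemas/IPv4Address'
        - $ref: '#/components/schemas/IPv6Address'
    IPv4Address:
      type: string
      format: ipv4
    IPv6Address:
      type: string
      format: ipv6
```

anyOf and allOf work the same way but require the instance to match at least one, or all, of the listed subschemas respectively.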
> GraphQL is delivering the API without using REST verbs, as all requests are > done using POST and its data. > Beyond that, what would be great (and it will ultimately come) is to have > both of them working together. > > The idea of the GraphQL Proof of Concept is to see what it can bring and at > what cost, such as effort and trade-offs. > And to compare this against the effort to adapt OpenStack APIs to use Open > API. > > BTW, what's the status of Open API 3.0 with regard to Microversions? > > Regards, > Gilles > > > Best Regards, > Edison Xiang > > On Wed, Aug 29, 2018 at 9:33 PM Ed Leafe wrote: >> On Aug 29, 2018, at 1:36 AM, Edison Xiang wrote: >> > >> > As we know, Open API 3.0 was released in July 2017, about one >> year ago. >> > Open API 3.0 supports some new features like anyOf, oneOf and allOf beyond >> Open API 2.0 (Swagger 2.0). >> > Now OpenStack projects do not support Open API. >> > Also I found some old emails in the Mail List about supporting Open API >> 2.0 in OpenStack. >> >> There is currently an effort by some developers to investigate the >> possibility of using GraphQL with OpenStack APIs. What would Open API 3.0 >> provide that GraphQL would not? I’m asking because I don’t know enough >> about Open API to compare them.
>> >> -- Ed Leafe >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email: gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Sep 7 02:43:20 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 07 Sep 2018 11:43:20 +0900 Subject: [openstack-dev] [qa] Canceling next week QA office hours due to PTG Message-ID: <165b1e9973c.10f62562887317.6093707149453697071@ghanshyammann.com> Hi All, As many of the QA folks will be at the PTG, I am canceling the QA office hours for next week. Office hours will resume after the PTG, on 20th Sept. -gmann From zhipengh512 at gmail.com Fri Sep 7 03:02:33 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 7 Sep 2018 11:02:33 +0800 Subject: [openstack-dev] [publiccloud-wg]PTG session prep Message-ID: Hi Folks, For those of you who are interested in the OpenStack public cloud, please take a look at the etherpad for our agenda [0]; you are more than welcome to suggest/add new topics! [0] https://etherpad.openstack.org/p/publiccloud-wg-stein-ptg -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., 
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Sep 7 04:18:47 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 7 Sep 2018 14:18:47 +1000 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][Telemetry] all requirement updates are currently blocked In-Reply-To: Message-ID: <20180907041847.GF31148@thor.bakeyournoodle.com> On Thu, Sep 06, 2018 at 01:33:12PM +0300, Michel Peterson wrote: > I remember that before landing the problematic patch [1] there was some > discussion around it. Basically the problem was not n-odl but ceilometer > not being in pypi, but we never foresaw this problem. > > Now that the problem is so critical, the question is how can we, from the > n-odl team, help in fixing this? I am open to help in any effort that > involves n-odl or any other project. As others have pointed out, we can just ask the Telemetry team (PTL on CC) why we can't publish ceilometer to pypi? https://pypi.org/project/ceilometer/ certainly seems to be the right project. The crux of the code issue is: from ceilometer.network.statistics import driver in networking_odl/ceilometer/network/statistics/opendaylight_v2/driver.py If this is supposed to be used the way you are using it, how are projects supposed to get the ceilometer code? Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Fri Sep 7 04:28:06 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 7 Sep 2018 14:28:06 +1000 Subject: [openstack-dev] [mistral] [release] [stable] Cherry-pick migration to stable/rocky In-Reply-To: References: Message-ID: <20180907042805.GG31148@thor.bakeyournoodle.com> On Wed, Sep 05, 2018 at 10:54:25AM +0100, Dougal Matthews wrote: > On 5 September 2018 at 10:52, Dougal Matthews wrote: > > > (Note: I added [release] to the email subject, as I think that will make > > it visible to the right folks.) > > > > Darn. It should have been [stable]. I have added that now. Sorry for the > noise. Backporting a migration like that is OK as long as you don't skip migrations, that is to say revision '030' of the database should be the same on all branches. Given we've only just released rocky I expect that will be the case here. You absolutely must have a release note and call it out as upgrade impact and of course this is a minor release not a patch release. If y'all push a release note (probably on master too?) then I'm okay with the backport and release Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From aj at suse.com Fri Sep 7 04:46:53 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 7 Sep 2018 06:46:53 +0200 Subject: [openstack-dev] [product] Retiring development-proposals Message-ID: <6e34fec0-c3ae-f634-6764-38219465c9c3@suse.com> The product WG is abandoned, so it's time to retire the development-proposals repository. 
The content of the WG is published and thus available for reference: http://specs.openstack.org/openstack/development-proposals/ I've started the retirement process using topic retire-productwg following https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project , Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From muroi.masahito at lab.ntt.co.jp Fri Sep 7 05:01:10 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Fri, 7 Sep 2018 14:01:10 +0900 Subject: Re: [openstack-dev] [blazar] about reservation In-Reply-To: References: Message-ID: <95e05624-a17c-c0f7-cad1-b4bec83907f9@lab.ntt.co.jp> Hello Jaze, In general, Blazar ensures its instance scheduling through a combination of its flavor or scheduler hint and a Nova scheduler filter. In the case of instance reservation, an instance with the flavor is scheduled to a reserved slot on a specific hypervisor. Please see the spec page for the details. https://docs.openstack.org/blazar/latest/specs/pike/new-instance-reservation.html best regards, Masahito On 2018/09/06 16:52, Jaze Lee wrote: > Hello, > I view the source code and do not find the check logic for > reservation a instance. It just create a lease, and nova just create a > flavor. > How do we ensure the resource is really reserved for us? > We put the host into a new aggregate? and nobody except blazar will use > the host? > From dangtrinhnt at gmail.com Fri Sep 7 05:09:11 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 7 Sep 2018 14:09:11 +0900 Subject: [openstack-dev] [Searchlight] Cancel team meeting next week Message-ID: Hi team, Due to the Denver PTG next week (Sep 10 - Sep 14), we will cancel our team meeting on Thursday, 13 Sep.
Cheers, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Sep 7 06:18:13 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 07 Sep 2018 15:18:13 +0900 Subject: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests In-Reply-To: <1b586dfd-594f-3f44-b6f3-8b232aa0ab5b@fried.cc> References: <11be89ad-a59a-1fe6-5c7b-badb4a06e643@fried.cc> <1b586dfd-594f-3f44-b6f3-8b232aa0ab5b@fried.cc> Message-ID: <165b2ae50f1.11ea66ed588256.3160647352194636247@ghanshyammann.com> ---- On Fri, 07 Sep 2018 04:41:32 +0900 Eric Fried wrote ---- > Jichen- > > That patch is not ever intended to merge; hope that was clear from the > start :) It was just a proving ground to demonstrate which tests still > pass when there's effectively no compute driver in play. > > We haven't taken any action on this from our end, though we have done a > little brainstorming about how we would tool our CI to skip those tests > most (but not all) of the time. Happy to share our experiences with you > if/as we move forward with that. > > Regarding the tempest-level automation, I certainly had z in mind when > I was thinking about it. If you have the time and inclination to help > look into it, that would be most welcome. Sorry for late response, somehow i missed this thread. As you mentioned and noticed in your patch that there are ~700 tests which does not touch compute driver. Most of them are from neutron-tempest-plugins or other service tests. From Tempest compute tests, many of them are negative tests or DB layer tests [1]. neutron-tempest-plugin or other service test you can always avoid to run with regex. And i do not think compute negative or DB test will take much time to run. 
But still if you want to avoid to run then, I think it is easy to maintain a whitelist regex file on CI side which can run only compute driver tests(61 in this case). Tagging compute driver on tempest side is little hard to maintain i feel and it can goes out of date very easily. If you have any other idea on that, we can surly talk in PTG on this. [1] http://184.172.12.213/66/599066/5/check/nova-powervm-out-of-tree-pvm/a1b42d5/powervm_os_ci.html.gz > > Thanks, > efried > > On 09/06/2018 12:33 AM, Chen CH Ji wrote: > > I see the patch is still ongoing status and do you have a follow up > > plan/discussion for that? we are maintaining 2 CIs (z/VM and KVM on z) > > so skip non-compute related cases will be a good for 3rd part CI .. thanks > > > > Best Regards! > > > > Kevin (Chen) Ji 纪 晨 > > > > Engineer, zVM Development, CSTL > > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > > Phone: +86-10-82451493 > > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > > District, Beijing 100193, PRC > > > > Inactive hide details for Eric Fried ---09/04/2018 09:35:09 PM---Folks- > > The other day, I posted an experimental patch [1] withEric Fried > > ---09/04/2018 09:35:09 PM---Folks- The other day, I posted an > > experimental patch [1] with an effectively > > > > From: Eric Fried > > To: "OpenStack Development Mailing List (not for usage questions)" > > > > Date: 09/04/2018 09:35 PM > > Subject: [openstack-dev] [tempest][CI][nova compute] Skipping > > non-compute-driver tests > > > > ------------------------------------------------------------------------ > > > > > > > > Folks- > > > > The other day, I posted an experimental patch [1] with an effectively > > empty ComputeDriver (just enough to make n-cpu actually start) to see > > how much of our CI would pass. The theory being that any tests that > > still pass are tests that don't touch our compute driver, and are > > therefore not useful to run in our CI environment. 
Because anything that > > doesn't touch our code should already be well covered by generic > > dsvm-tempest CIs. The results [2] show that 707 tests still pass. > > > > So I'm wondering whether there might be a way to mark tests as being > > "compute driver-specific" such that we could switch off all the other > > ones [3] via a one-line conf setting. Because surely this has potential > > to save a lot of CI resource not just for us but for other driver > > vendors, in tree and out. > > > > Thanks, > > efried > > > > [1] https://review.openstack.org/#/c/599066/ > > [2] > > http://184.172.12.213/66/599066/5/check/nova-powervm-out-of-tree-pvm/a1b42d5/powervm_os_ci.html.gz > > [3] I get that there's still value in running all those tests. But it > > could be done like once every 10 or 50 or 100 runs instead of every time. > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From pierre.gaxatte at gmail.com Fri Sep 7 07:21:43 2018 From: pierre.gaxatte at gmail.com (Pierre Gaxatte) Date: Fri, 7 Sep 2018 09:21:43 +0200 Subject: [openstack-dev] [mistral] [release] [stable] Cherry-pick migration to stable/rocky In-Reply-To: 
<20180907042805.GG31148@thor.bakeyournoodle.com> References: <20180907042805.GG31148@thor.bakeyournoodle.com> Message-ID: > Backporting a migration like that is OK as long as you don't skip > migrations, that is to say revision '030' of the database should be the > same on all branches. Given we've only just released rocky I expect > that will be the case here. > > You absolutely must have a release note and call it out as upgrade impact > and of course this is a minor release not a patch release. The release note was not in the initial change so here it is: https://review.openstack.org/#/c/600018 Any input is welcome as the wording and content might not be exactly what you expect. Regards, Pierre From skaplons at redhat.com Fri Sep 7 07:30:23 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 7 Sep 2018 09:30:23 +0200 Subject: [openstack-dev] [neutron] Pep8 job failures Message-ID: <23EDBF8F-6221-43E4-9320-74D070A4F97E@redhat.com> Hi, Recently a bump of eventlet to 0.24.0 was merged in the requirements repo [1]. That caused an issue in Neutron, and the pep8 job is now failing; see [2]. So if pep8 is failing in your patch with an error like in [3], please don’t recheck the job - it will not help :) A patch to fix this is already proposed [4]. Once it is merged, please rebase your patch and the issue should be solved.
[1] https://review.openstack.org/#/c/589382/ [2] https://bugs.launchpad.net/neutron/+bug/1791178 [3] http://logs.openstack.org/37/382037/73/gate/openstack-tox-pep8/7f200e6/job-output.txt.gz#_2018-09-06_17_48_34_700485 [4] https://review.openstack.org/600565 — Slawek Kaplonski Senior software engineer Red Hat From tony at bakeyournoodle.com Fri Sep 7 07:38:52 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 7 Sep 2018 17:38:52 +1000 Subject: [openstack-dev] [stable][keystone] python3 goal progress and tox_install.sh removal In-Reply-To: References: Message-ID: <20180907073851.GA16495@thor.bakeyournoodle.com> On Thu, Sep 06, 2018 at 03:01:01PM -0500, Lance Bragstad wrote: > I'm noticing some odd cases with respect to the python 3 community goal > [0]. So far my findings are specific to keystone repositories, but I can > imagine this affecting other projects. > > Doug generated the python 3 reviews for keystone repositories, including > the ones for stable branches. We noticed some issues with the ones proposed > to stable (keystoneauth, python-keystoneclient) and master > (keystonemiddleware). For example, python-keystoneclient's stable/pike [1] > and stable/ocata [2] branches are both failing with something like [3]: > > ERROR: You must give at least one requirement to install (see "pip help > install") I've updated 1 and 2 to do the same thing that lots of other repos do and just exit 0 in this case. 1 and 2 now have a +1 from zuul. > I've attempted to remove tox_install.sh using several approaches with > keystonemiddleware master [7]. None of which passed both unit tests and the > requirements check. Doug pointed out the fix here, which I added. It passed most of the gate but failed in an unrelated neutron test so I've rechecked it. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From julien at danjou.info Fri Sep 7 09:09:15 2018 From: julien at danjou.info (Julien Danjou) Date: Fri, 07 Sep 2018 11:09:15 +0200 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][Telemetry] all requirement updates are currently blocked In-Reply-To: <20180907041847.GF31148@thor.bakeyournoodle.com> (Tony Breeds's message of "Fri, 7 Sep 2018 14:18:47 +1000") References: <20180907041847.GF31148@thor.bakeyournoodle.com> Message-ID: On Fri, Sep 07 2018, Tony Breeds wrote: > On Thu, Sep 06, 2018 at 01:33:12PM +0300, Michel Peterson wrote: > >> I remember that before landing the problematic patch [1] there was some >> discussion around it. Basically the problem was not n-odl but ceilometer >> not being in pypi, but we never foresaw this problem. >> >> Now that the problem is so critical, the question is how can we, from the >> n-odl team, help in fixing this? I am open to help in any effort that >> involves n-odl or any other project. > > As other have pointed out we can just ask the Telemetry team (PTL on CC) > why we can't publish ceilometer to pypi? You can, I've already said +1 on a review a few weeks ago. :) > https://pypi.org/project/ceilometer/ certainly seems to be the right > project. > > The crux of the code issue is: > from ceilometer.network.statistics import driver > > in networking_odl/ceilometer/network/statistics/opendaylight_v2/driver.py > > If this is supposed to be used they way you are how are prjects supposed > to get the ceilometer code? > > > > Yours Tony. > -- Julien Danjou // Free Software hacker // https://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From jichenjc at cn.ibm.com Fri Sep 7 09:12:40 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Fri, 7 Sep 2018 17:12:40 +0800 Subject: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests In-Reply-To: <165b2ae50f1.11ea66ed588256.3160647352194636247@ghanshyammann.com> References: <11be89ad-a59a-1fe6-5c7b-badb4a06e643@fried.cc> <1b586dfd-594f-3f44-b6f3-8b232aa0ab5b@fried.cc> <165b2ae50f1.11ea66ed588256.3160647352194636247@ghanshyammann.com> Message-ID: Thanks for the confirmation. I agree that maintaining an internal CI whitelist is a good way to go, and I confirmed with our CI folks that we already do so; more cases can be removed per Eric's test result below, so we will compare and remove unnecessary cases to get more bandwidth to run 3rd party CI. thanks Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Ghanshyam Mann To: "OpenStack Development Mailing List \\" Date: 09/07/2018 02:18 PM Subject: Re: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests ---- On Fri, 07 Sep 2018 04:41:32 +0900 Eric Fried wrote ---- > Jichen- > > That patch is not ever intended to merge; hope that was clear from the > start :) It was just a proving ground to demonstrate which tests still > pass when there's effectively no compute driver in play. > > We haven't taken any action on this from our end, though we have done a > little brainstorming about how we would tool our CI to skip those tests > most (but not all) of the time. Happy to share our experiences with you > if/as we move forward with that. > > Regarding the tempest-level automation, I certainly had z in mind when > I was thinking about it.
If you have the time and inclination to help > look into it, that would be most welcome. Sorry for the late response, somehow I missed this thread. As you mentioned and noticed in your patch, there are ~700 tests which do not touch the compute driver. Most of them are from neutron-tempest-plugin or other service tests. Of the Tempest compute tests, many are negative tests or DB layer tests [1]. The neutron-tempest-plugin or other service tests you can always avoid running with a regex, and I do not think the compute negative or DB tests will take much time to run. But if you still want to avoid running them, I think it is easy to maintain a whitelist regex file on the CI side which runs only the compute driver tests (61 in this case). Tagging compute driver tests on the Tempest side is a little hard to maintain, I feel, and it can go out of date very easily. If you have any other ideas on that, we can surely talk about this at the PTG. [1] http://184.172.12.213/66/599066/5/check/nova-powervm-out-of-tree-pvm/a1b42d5/powervm_os_ci.html.gz > > Thanks, > efried > > On 09/06/2018 12:33 AM, Chen CH Ji wrote: > > I see the patch is still in ongoing status; do you have a follow-up > > plan/discussion for that? We are maintaining 2 CIs (z/VM and KVM on z), > > so skipping non-compute related cases will be good for 3rd party CI .. thanks > > > > Best Regards!
> > > > Kevin (Chen) Ji 纪 晨 > > > > Engineer, zVM Development, CSTL > > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > > Phone: +86-10-82451493 > > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > > District, Beijing 100193, PRC > > > > From: Eric Fried > > To: "OpenStack Development Mailing List (not for usage questions)" > > > > Date: 09/04/2018 09:35 PM > > Subject: [openstack-dev] [tempest][CI][nova compute] Skipping > > non-compute-driver tests > > > > ------------------------------------------------------------------------ > > > > > > > > Folks- > > > > The other day, I posted an experimental patch [1] with an effectively > > empty ComputeDriver (just enough to make n-cpu actually start) to see > > how much of our CI would pass. The theory being that any tests that > > still pass are tests that don't touch our compute driver, and are > > therefore not useful to run in our CI environment. Because anything that > > doesn't touch our code should already be well covered by generic > > dsvm-tempest CIs. The results [2] show that 707 tests still pass. > > > > So I'm wondering whether there might be a way to mark tests as being > > "compute driver-specific" such that we could switch off all the other > > ones [3] via a one-line conf setting. Because surely this has potential > > to save a lot of CI resource not just for us but for other driver > > vendors, in tree and out. > > > > Thanks, > > efried > > > > [1] https://review.openstack.org/#/c/599066/ > > [2] > > http://184.172.12.213/66/599066/5/check/nova-powervm-out-of-tree-pvm/a1b42d5/powervm_os_ci.html.gz > > [3] I get that there's still value in running all those tests.
But it > > could be done like once every 10 or 50 or 100 runs instead of every time. > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From skramaja at redhat.com Fri Sep 7 10:07:04 2018 From: skramaja at redhat.com (Saravanan KR) Date: Fri, 7 Sep 2018 15:37:04 +0530 Subject: [openstack-dev] [tripleo] VFs not configured in SR-IOV role In-Reply-To: References: Message-ID: Not sure which version you are using, but the service "OS::TripleO::Services::NeutronSriovHostConfig" is responsible for setting up VFs. Check if this service is enabled in the deployment. 
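As a quick way to verify that, here is a minimal illustrative sketch (hypothetical code; the role data is inlined here, while a real check would load roles_data.yaml with a YAML parser):

```python
# Hypothetical sketch: confirm a TripleO role lists the SR-IOV host
# config service before deploying. The role dict is inlined for
# illustration only; a real check would parse roles_data.yaml instead.
SRIOV_SERVICE = "OS::TripleO::Services::NeutronSriovHostConfig"

def role_has_sriov(role):
    """Return True if the role's service list includes the SR-IOV service."""
    return SRIOV_SERVICE in role.get("ServicesDefault", [])

compute_sriov = {
    "name": "ComputeSriov",
    "ServicesDefault": [
        "OS::TripleO::Services::NovaCompute",
        SRIOV_SERVICE,
    ],
}
print(role_has_sriov(compute_sriov))
```

If the service is missing from the role's ServicesDefault list, the VF setup described above would never be applied on that node.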
One of the missing places is being fixed - https://review.openstack.org/#/c/597985/ Regards, Saravanan KR On Tue, Sep 4, 2018 at 8:58 PM Samuel Monderer wrote: > > Hi, > > Attached is the template used to deploy an overcloud with an SR-IOV role. > The deployment completed successfully but the VFs aren't configured on the host. > Can anyone have a look at what I missed? > > Thanks > Samuel > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aj at suse.com Fri Sep 7 11:53:46 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 7 Sep 2018 13:53:46 +0200 Subject: [openstack-dev] [all][infra] Moving cover job from post to check pipeline In-Reply-To: <9a0076f2-8677-2d36-5dfd-f3a3ba8a0d26@suse.com> References: <9a0076f2-8677-2d36-5dfd-f3a3ba8a0d26@suse.com> Message-ID: On 2018-09-06 20:10, Andreas Jaeger wrote: > Citing Ian Wienand in [2] > > "There was a thread some time ago that suggested coverage jobs weren't > doing much in the "post" pipeline because nobody looks at them and the > change numbers may be difficult to find anyway [1]. This came up again > in a cleanup to add non-voting coverage jobs in > I5c42530d1dda41b8dc8c13cdb10458745bec7bcc. > > There really is no consistency across projects; it seems like a couple > of different approaches have been cargo-cult copied as new projects came > in, depending on which random project was used as a template. This > change does a cleanup by moving all post coverage jobs into the check > queue as non-voting jobs." Correcting: It's a voting job. Note that I pushed changes - using topic:update-zuul - to projects with that update and found a few broken cover jobs.
Those were run as post jobs and always failed ;( Andreas > I've updated Ian's change [2] now and propose to move ahead with it - > and suggest that projects with in-repo coverage job follow it as well. > Let's use the new template [3] openstack-cover-jobs (and it's > -horizon/-neutron variants) for this change. > > Andreas > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2016-July/099491.html > [2] https://review.openstack.org/#/c/432836/ > [3] > https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-openstack-cover-jobs > > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From e0ne at e0ne.info Fri Sep 7 12:06:33 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Fri, 7 Sep 2018 15:06:33 +0300 Subject: [openstack-dev] [horizon] No meeting next week Message-ID: Hi team, A lot of us will attend PTG in Denver next week so we skip the meeting on 9/12. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Fri Sep 7 12:18:34 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 7 Sep 2018 07:18:34 -0500 Subject: [openstack-dev] Nominating Chris Dent for placement-core In-Reply-To: References: Message-ID: <640ffdf8-36d2-1f76-0b23-fad5fb42565a@fried.cc> After a week with only positive responses, it is my pleasure to add Chris to the placement-core team. Welcome home, Chris. On 08/31/2018 10:45 AM, Eric Fried wrote: > The openstack/placement project [1] and its core team [2] have been > established in gerrit. > > I hereby nominate Chris Dent for membership in the placement-core team. 
> He has been instrumental in the design, implementation, and stewardship > of the placement API since its inception and has shown clear and > consistent leadership. > > As we are effectively bootstrapping placement-core at this time, it > would seem appropriate to consider +1/-1 responses from heavy placement > contributors as well as existing cores (currently nova-core). > > [1] https://review.openstack.org/#/admin/projects/openstack/placement > [2] https://review.openstack.org/#/admin/groups/1936,members > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tobias.urdin at binero.se Fri Sep 7 12:40:56 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 7 Sep 2018 14:40:56 +0200 Subject: [openstack-dev] [puppet] Puppet weekly recap - week 35-36 Message-ID: <24279952-96d4-1902-49ec-965750bd416a@binero.se> Hello fellow Puppeteers! Welcome to the weekly Puppet recap for weeks 35 and 36. Because I was away last week I forgot to follow up on the progress of week 35. The past two weeks have been quite calm; we have had a lot of changes due to the move away from Zuul config in project-config and the bump of the version for all modules on the master branch to prepare for the Stein cycle. MERGED CHANGES ============ ===== puppet-barbican ===== * 34e6f3a Add barbican::worker class The Barbican module now supports installation and management of the barbican-worker service. ===== puppet-cinder ===== * 3c634d2 Deprecate parameters that have been removed from cinder Config options for Cinder that were removed in Queens have been deprecated in the Puppet interface.
===== puppet-ironic ===== * f37d8f6 Add tests for oslo_messaging_notifications Added missing spec tests for the recent oslo messaging addition * 27bf3a0 Expose the notification_level parameter Added the [DEFAULT]/notification_level configuration option ===== puppet-neutron ===== * 33f8cdc Notify about deprecated neutron::services::lbaas Neutron LBaaS is deprecated so a warning has been added * c70e4fa Add ensure_lbaas_package to neutron::server Added the ability to install the LBaaS plugin from the neutron::server class ===== puppet-nova ===== 393694a Add a release note for sync_power_state_interval parameter b4f3d6a compute: add sync_power_state_interval parameter Added the sync_power_state_interval configuration option for nova. ===== puppet-octavia ===== 2b83ae2 Add octavia::certificates::client_ca and data 45673ee Added missing DB params to init class e1531c3 Add Octavia API WSGI support d2a9586 Add octavia::neutron to configure neutron section 6731e53 Add octavia::glance to configure glance section 9b285e7 Add missing options to octavia::certificates 6864cd0 Add octavia::nova to configure nova section 7d6bada Add workers support to octavia::worker class 6e7dacc Add api_v1_enabled and api_v2_enabled options 14c5257 Add allow_tls_terminated_listeners config option The Octavia module has had a lot of changes to make sure it's fit to be used. It now includes all the required classes and configuration options to use it. You can run the API in WSGI, enable/disable the v1 and v2 API, set whether TLS listeners are allowed and separate the client and server CA certificates. ===== puppet-openstack-integration ===== * 1edb135 Remove tripleo-puppet-ci-centos-7-undercloud-oooq job Removed TripleO testing for non-containerized undercloud. * 9cb0e06 Higher timeout and two tries for update packages In the repos.pp file the upgrade packages call times out so we tried increasing the timeout and set tries to two (2) but it did not solve the issue.
* 15dd562 Add barbican::worker to integration testing The newly added barbican::worker class is tested in the integration testing. ===== puppet-vswitch ===== * b6dab2e Fix the undefined method 'chomp' for nil:NilClass error seen with ovs 2.10 Fixed a bug where, when the output to stdout contained error messages, the provider would fail to parse the values; it now ignores nil values. SPECS ===== No new specs. Only one open spec: * Add parameter data types spec https://review.openstack.org/#/c/568929/ CI ===== There are some current issues with the CI; if anybody has time, we would all appreciate your help resolving them. * The update-packages call in repos.pp for openstack/puppet-openstack-integration times out on CentOS 7 (calling yum upgrade). This makes integration testing for puppet-openstacklib fail. * The stable/ocata and stable/pike branches have issues and are failing; this blocks most of the Zuul config retirement patches from project-config that Doug has proposed. We need to resolve this by unblocking (fixing) the CI, which probably means backporting previous CI fixes. I have been very busy with finalizing a project so I've not been able to look at this. I will have a look this weekend or hopefully next week but would appreciate any help.
After CI is fixed we can merge all these: * Update Gemfile for stable/rocky (openstack/puppet-openstacklib) https://review.openstack.org/#/c/597087/ * make openstackclient package name configurable (openstack/puppet-openstacklib) https://review.openstack.org/#/c/599015/ * All the python3-first topic changes (named "import zuul job settings from project-config" and "switch documentation job to new PTI") NEEDS REVIEWS ========== * Make ironic password a secret https://review.openstack.org/#/c/598173/ * Remove usage of deprecated RamFilter https://review.openstack.org/#/c/597204/ * Add cinder::nova class to configure nova section https://review.openstack.org/#/c/600559/ * Cinder-type param is_public/access_project_ids https://review.openstack.org/#/c/600071/ * Remove ironic inspector dnsmasq bind-interfaces setting https://review.openstack.org/#/c/600068/ * Add octavia testing https://review.openstack.org/#/c/597600/ * Change beaker testing to use p-o-i manifest (depends on: Add octavia testing) https://review.openstack.org/#/c/598170/ * make openstackclient package name configurable --- WAITING FOR CI FIX https://review.openstack.org/#/c/599015/ OTHER ===== * If somebody wants to push this, feel free; we need packages to include the new wsgi.py file that Horizon is changing to, and then get this changed in puppet-horizon. Django WSGI entrypoint change https://review.openstack.org/#/c/593608/ As always, if you need help, have questions or need reviews don't hesitate to head over to #puppet-openstack and ping us. Wishing you all a great weekend! Best regards Tobias From nagexiucai at qq.com Fri Sep 7 12:58:37 2018 From: nagexiucai at qq.com (Bob-XiuCai) Date: Fri, 7 Sep 2018 20:58:37 +0800 Subject: [openstack-dev] it may be a bug of swift3 1.7.0 or ?
Message-ID: Hi, Environment: openstack (liberty) swift swift3: 1.7.0 boto: 2.48.0 If a bucket name has Unicode characters such as Chinese words (`u'\u54e6\u76c6\u65af\u5766'`), boto will return "AccessDenied". See this (https://github.com/boto/boto3/issues/1693) for details; I thought it was boto's fault at first. Regards! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Sep 7 13:19:23 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 08:19:23 -0500 Subject: [openstack-dev] [election][tc] TC Candidacy In-Reply-To: <165ade4d83b.eadca46557606.5598003762013755194@ghanshyammann.com> References: <165ade4d83b.eadca46557606.5598003762013755194@ghanshyammann.com> Message-ID: <0e5cb22c-d73f-d3f2-5358-9149a703ad26@gmail.com> On 9/6/2018 2:59 AM, Ghanshyam Mann wrote: > * Share Project teams' work for Common Goals: Every cycle we have TC goals and some future directions where all the projects need to start working. Projects try to do their best on that, but the big challenge for them is resource bandwidth. In the current situation, it is very hard for project teams to accommodate that work by themselves. Project teams are shrinking and key members are overloaded. My idea is to form a temporary team of contributors under the goal champion and finish that common-area work at the start of the cycle (so that we can make sure to finish the work well on time and test throughout the cycle). That temporary team can be formed with volunteers from any project team or new part-time contributors with the help of OUI or the FirstContact SIG etc. This is a good idea and something I personally would like to see the TC doing to move actual positive technical changes forward across OpenStack.
-- Thanks, Matt From mriedemos at gmail.com Fri Sep 7 13:20:59 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 08:20:59 -0500 Subject: [openstack-dev] [election][tc] Announcing candidacy In-Reply-To: References: Message-ID: <8cd4c3ac-85c6-cac1-c1db-678b57634447@gmail.com> On 9/6/2018 1:49 PM, Rico Lin wrote: > * Cross-community integrations (K8s, CloudFoundry, Ceph, OPNFV) Are there some specific initiatives or deliverables you have in mind here, or just general open communication channels? It's very hard to gauge any kind of progress/success on the latter. -- Thanks, Matt From mriedemos at gmail.com Fri Sep 7 13:22:57 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 08:22:57 -0500 Subject: [openstack-dev] [election] [tc] TC candidacy In-Reply-To: References: Message-ID: <584d6c65-7b13-e0d1-842a-ebb9f7fb6290@gmail.com> On 9/5/2018 2:49 PM, Samuel Cassiba wrote: > Though my hands-on experience goes back several releases, I still view > things from the outside-looking-in perspective. Having the outsider > lens is crucial in the long-term for any consensus-driven group, > regardless of that consensus. > > Regardless of the election outcome, this is me taking steps to having a > larger involvement in the overall conversations that drive so much of > our daily lives. At the end of the day, we're all just groups of people > trying to do our jobs. I view this as an opportunity to give back to a > community that has given me so much. Are there specific initiatives you plan on pushing forward if on the TC? 
I'm thinking about stuff from the laundry list here: https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Other_Initiatives -- Thanks, Matt From mriedemos at gmail.com Fri Sep 7 13:26:35 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 08:26:35 -0500 Subject: [openstack-dev] [election] [tc] TC Candidacy In-Reply-To: References: Message-ID: On 9/5/2018 1:20 PM, Kristi Nikolla wrote: > I’m really excited to have the opportunity to take part in the discussion with > regards to the technical vision for OpenStack. Regardless of election outcome, > this is the first step towards a larger involvement from me in the important > discussions (no more shying away from the important mailing list threads.) I'm not trying to pick on you Kristi, but personally I'm tired of the TC vision question that's been going on for years now and would like the people I vote for to spend less time talking about OpenStack and what it is or what it isn't (because that changes based on the person you talk to and on what day you ask them), and spend more time figuring out how to move cross-project initiatives forward. So whether or not OpenStack is a toolkit for private/public/edge clouds, or a product, or something else, there are likely common themes within OpenStack that we can generally agree on across projects and need people to work on them, rather than just talk about doing them. Are there specific cross-project initiatives you are interested in working on? -- Thanks, Matt From mriedemos at gmail.com Fri Sep 7 13:26:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 08:26:58 -0500 Subject: [openstack-dev] [election] [tc] thank you In-Reply-To: References: <4671e7de-6155-a61d-1625-5487c7250e32@openstack.org> Message-ID: On 9/5/2018 10:03 AM, Anita Kuno wrote: > On 2018-09-05 04:01 AM, Thierry Carrez wrote: >> Emilien Macchi wrote: > >>> I personally feel like some rotation needs to happen > > A very honourable sentiment, Emilien. 
+1 -- Thanks, Matt From mriedemos at gmail.com Fri Sep 7 13:29:21 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 08:29:21 -0500 Subject: [openstack-dev] [election][tc] announcing candidacy In-Reply-To: References: Message-ID: On 9/4/2018 7:15 PM, Julia Kreger wrote: > The most specific thing that is weighing on my mind is elevating and > supporting contributors. While this is not new, I think we as a > community need to refocus on it because they are the very fibers that make > up the fabric of our community and ultimately the electorate. Do you have specific *kinds* of contributors in mind here? Like are you mostly thinking new or part-time contributors, or are you also including long-time maintainers of the project, because let's not forget those are also contributors (usually in a large personal sacrificial way). Do you have specific ideas on how to elevate and support contributors? Or what do you see as the major issues not being addressed? Burnout? Contributors' backing companies not supporting them in some form? Other? -- Thanks, Matt From mriedemos at gmail.com Fri Sep 7 13:32:00 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 08:32:00 -0500 Subject: [openstack-dev] [election][tc] announcing candidacy In-Reply-To: <1536064192-sup-380@lrrr.local> References: <1536064192-sup-380@lrrr.local> Message-ID: <32030f59-dbc0-4482-294a-1b9ea038baa2@gmail.com> On 9/4/2018 7:30 AM, Doug Hellmann wrote: > I am announcing my candidacy for a position on the OpenStack > Technical Committee. > > I started contributing to OpenStack in 2012, not long after joining > Dreamhost, and I am currently employed by Red Hat to work on OpenStack > with a focus on long-term project concerns. I have served on the > Technical Committee for the last five years, including as Chair during > the last term. I have also been PTL of the Oslo and Release Management > teams at different points in the past.
> > I have spent most of my time in all of those roles over the last few > years making incremental improvements in our ability to collaborate > while building OpenStack, including initiatives such as leading the > current community goal to run CI jobs under Python 3 by default [1]; > coordinating last year's documentation migration; and updating our > dependency management system to make it easier for projects to run > stand-alone. > > During my time serving as TC Chair, I have tried to update the way the > group works with the community. We started by performing a "health > check" for all of our project teams [2], as a way to spot potential > issues teams are experiencing that we can help with, and to encourage TC > members to learn more about teams they may not interact with on a > daily basis. We will be reviewing the results at the PTG [3], and > continuing to refine that process. > > I have also had a few opportunities this year to share our governance > structure with other communities [4][5]. It's exciting to be able to > talk to them about how the ideals and principles that hold our > community together can also apply to their projects. > > The OpenStack community continues to be the most welcoming group I > have interacted with in more than 25 years of contributing to open > source projects. I look forward to another opportunity to serve the > project through the Technical Committee over the coming year. 
> > Thank you, > Doug > > Candidacy submission: https://review.openstack.org/599582 > Review history: https://review.openstack.org/#/q/reviewer:2472,n,z > Commit history: https://review.openstack.org/#/q/owner:2472,n,z > Foundation Profile: > http://www.openstack.org/community/members/profile/359 > Freenode: dhellmann > Website: https://doughellmann.com > > [1] https://governance.openstack.org/tc/goals/stein/python3-first.html > [2] https://wiki.openstack.org/wiki/OpenStack_health_tracker > [3] https://etherpad.openstack.org/p/tc-stein-ptg > [4] https://doughellmann.com/blog/2018/08/21/planting-acorns/ > [5] https://www.python.org/dev/peps/pep-8002/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > I have generally been very cynical of repeat TC members, mostly because I don't know what they actually get done, but this candidacy email is a very nice example of specific issues that you've worked on and I really appreciate you being able to point out the things you've worked on while being on the TC. Thanks for pushing on this stuff Doug. -- Thanks, Matt From mriedemos at gmail.com Fri Sep 7 13:34:17 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 08:34:17 -0500 Subject: [openstack-dev] [election] [tc] TC candidacy In-Reply-To: References: Message-ID: On 9/5/2018 6:49 PM, Zhipeng Huang wrote: > I found that most of my statement for my last ran is still valid today > [0][1]. I want to build strong cross-community collaboration, best > practices for project level governance and more innovations for OpenStack. As I asked Rico, are there specific cross-community initiatives or deliverables you plan on working on, or just having open dialog? Because the latter doesn't mean much to me personally. 
If the former, can you point some out? -- Thanks, Matt From cdent+os at anticdent.org Fri Sep 7 13:35:49 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 7 Sep 2018 14:35:49 +0100 (BST) Subject: [openstack-dev] [placement] update 18-36 Message-ID: HTML: https://anticdent.org/placement-update-18-36.html Welcome back to the placement update. The last one was [5 weeks ago](https://anticdent.org/placement-update-18-31.html). I took a break to focus on some other things for a while. I plan to make it a regular thing again, but will be skipping next week for the PTG. The big news is that there is now a [placement repository](https://git.openstack.org/cgit/openstack/placement). That's the thing I was focussing on. [Work is progressing](https://review.openstack.org/#/q/project:openstack/placement) to get it healthy and happy. Because of that, henceforth the shape of this update will change a bit. If I'm able to find them, I'm going to try to include anything that directly relates to placement. Primarily this will be stuff in the placement repo itself, and related changes in nova, but hopefully it will also include work in Blazar, Cyborg, Neutron, Zun and other projects that are either already working with placement or planning to do so soon. I can't see everything though so if I miss something, please let me know. For this edition I'm not going to go out of my way to report on individual reviews, rather set the stage for the future. # Most Important If you're going to be at the PTG next week there will be plenty to talk about related to placement. * On Monday between 2-3pm Cyborg, Nova, and Placement -interested people will meet in the Cyborg room. * On Tuesday 10am it's with Blazar. * Sometime, maybe Tuesday afternoon (TBD), with Cinder. * Much of Wednesday: in the Nova room to discuss Placement (the service) and placement (the process) -related topics. 
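For anyone getting oriented before those sessions, the service's central request is GET /allocation_candidates. Below is a tiny illustrative helper for building such a query string; the resource classes and trait name are example values only, not anything mandated by this update:

```python
from urllib.parse import urlencode

def allocation_candidates_url(resources, required=()):
    """Build the query path for placement's GET /allocation_candidates.

    resources maps resource class name -> requested amount;
    required is an iterable of trait names the provider must expose.
    """
    params = {"resources": ",".join(
        "%s:%d" % (rc, amount) for rc, amount in sorted(resources.items()))}
    if required:
        params["required"] = ",".join(sorted(required))
    # Keep ',' and ':' readable instead of percent-encoded.
    return "/allocation_candidates?" + urlencode(params, safe=",:")

print(allocation_candidates_url({"VCPU": 1, "MEMORY_MB": 512},
                                required=["HW_CPU_X86_AVX"]))
```

The comma-joined RC:amount pairs match the form the placement API expects in the resources query parameter.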
The other pending issues are related to upgrades (from-nova, to-placement), migrating existing data, and management of schema migrations. Matt [posted a summary of some of that](http://lists.openstack.org/pipermail/openstack-dev/2018-September/134395.html) to get feedback from the wider community. # What's Changed openstack/placement Propose your changes to placement there, not nova. Nova still has placement code within itself, but for the time being the placement parts are [frozen](http://lists.openstack.org/pipermail/openstack-dev/2018-August/134042.html). # Bugs For now, bugs are still being tracked under nova using the tag `placement`. There will likely be some changes in this, but it works for now. There's also an etherpad where [cleanups and todos](https://etherpad.openstack.org/p/placement-extract-stein-3) are being remembered. * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 17. * [In progress placement bugs](https://goo.gl/vzGGDQ) 10. # Specs It's that time in the cycle, so let's have a specs section. This currently includes proposals in nova-specs (where placement-service-related specs will live for a while). In the future it will also have any other stuff I can find out there in the world. 
* Account for host agg allocation ratio in placement (Still in rocky/) * Placement: any traits in allocation_candidate query * Add subtree filter for GET /resource_providers * Network bandwidth resource provider * Resource provider - request group mapping in allocation candidate * Placement: support mixing required traits with any traits * VMware: place instances on resource pool (still in rocky/) * Standardize CPU resource tracking * Allow overcommit of dedicated CPU (Has an alternative which changes allocations to a float) * List resource providers having inventory * Bi-directional enforcement of traits * allow transferring ownership of instance * Placement model for passthrough devices * Propose counting quota usage from placement and API database (A bit out of date but may be worth resurrecting) # Main Themes We'll figure out what the main themes are next week at the PTG, once that happens this section will have more. In the meantime: ## Reshape Provider Trees Testing of the `/reshaper` from libvirt and xen drivers is showing some signs of success moving VGPU inventory from the compute node to a child provider. ## Consumer Generations There continues to be work in progress on the nova side to make best use of consumer generations. See: # Other The placement repo is currently small enough that looking at [all open patches](https://review.openstack.org/#/q/project:openstack/placement+status:open) isn't too overwhelming. Because of all the recent work with extraction, and because the PTG is next week I'm not up to date on what patches that are related to placement are in need of review. In the meantime if you want to go looking around, [anything with 'placement' in the commit mesage](https://review.openstack.org/#/q/message:placement+status:open) is fun. Next time I'll provide more detail. # End Thanks to everyone for getting placement this far. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From zhipengh512 at gmail.com Fri Sep 7 13:54:13 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 7 Sep 2018 21:54:13 +0800 Subject: [openstack-dev] [election] [tc] TC candidacy In-Reply-To: References: Message-ID: Thx Matt, I think as I described in the candidacy patch, there are two specific areas where I would like to pursue cross-community collaboration. One is related to the cyborg project, where the team is working to build the open heterogeneous resource mgmt platform. I would like to extend this mission over to kubernetes, which currently lacks such a component and could benefit hugely from our work here in OpenStack. There are also other communities like OPNFV where the edge cloud project, the C-RAN project and Rocket project will be integrating and testing cyborg, as well as ONNX where AI models will meet the resource models we defined in Cyborg for NPUs and GPUs and FPGAs and whatever hardware should be chosen. The other one is policy, which relates to the Kubernetes Policy WG I'm leading and the CNCF Security WG which is under voting by the ToC to take shape. We have a great policy-in-code implementation in Keystone and I'm keen on investigating how that should impact Kubernetes or Istio or SPIFFE when we stack them up. Of course, there are other areas that I'm also working on which bridge communities; one such example is the cloud ledger idea proposed for the public cloud WG, which involves collaboration with the Ethereum Classic community, and hopefully Hyperledger and others in the near future. However, this is a long-term effort compared to the two aspects mentioned above. Hope that answers the question :) On Fri, Sep 7, 2018 at 9:34 PM Matt Riedemann wrote: > On 9/5/2018 6:49 PM, Zhipeng Huang wrote: > > I found that most of my statement for my last ran is still valid today > > [0][1]. 
I want to build strong cross-community collaboration, best > > practices for project level governance and more innovations for > OpenStack. > > As I asked Rico, are there specific cross-community initiatives or > deliverables you plan on working on, or just having open dialog? Because > the latter doesn't mean much to me personally. If the former, can you > point some out? > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Sep 7 14:12:41 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 7 Sep 2018 09:12:41 -0500 Subject: [openstack-dev] [keystone] 2018 User Survey Results Message-ID: The foundation just gave me a copy of the latest feedback from our users. I wanted to share this with the group so people have time to digest it prior to the PTG next week [0]. Here is the total count based on each response: Federated identity enhancements had *184* responses Performance improvements had *144* responses Scaling out to multiple regions had *136* responses Enhancing policy had *92* responses Per domain configuration had *79* responses Next Wednesday I have a time slot set aside to go through the results as a group. 
Otherwise we can use the time to refine the questions we present in the survey, since they haven't changed in years (I think Steve put the ones we have today in place). The script I used to count each occurrence is available [1] in case you recently received survey results and want to parse them in a similar fashion. [0] https://docs.google.com/spreadsheets/d/1wz-GOoFODGWrFuGqVWDunEWsuhC_lvRJLrfUybTj69Q/edit?usp=sharing [1] https://gist.github.com/lbragstad/a812df72494ffbbbc8c742f4d90333d5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Sep 7 14:13:51 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 09:13:51 -0500 Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction In-Reply-To: References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: <4d429cb7-72ff-db2a-b6ea-4bdbc9c369d8@gmail.com> On 9/6/2018 8:29 PM, Erik McCormick wrote: > We are planning to attend the upgrade sessions on Monday as a group. How > about we put it there? I threw it in the upgrades sig ptg etherpad. Where it goes in the agenda on Monday afternoon is up to you guys. -- Thanks, Matt From mriedemos at gmail.com Fri Sep 7 14:17:55 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 09:17:55 -0500 Subject: [openstack-dev] [election] [tc] TC candidacy In-Reply-To: References: Message-ID: <09ea4107-9d1c-b9e1-0658-cf48580bc404@gmail.com> On 9/7/2018 8:54 AM, Zhipeng Huang wrote: > One is related to the cyborg project where the team is working to build > the open heterogenous resource mgmt platform. I would like to extend > this mission over to kubernetes, which currently lack of such component > and could benefit hugely from our work here in OpenStack. 
There are also > other communities like OPNFV where the edge cloud project, the C-RAN > project and Rocket project will be integrating and testing cyborg, as > well as ONNX where AI models will meet the resource models we defined in > Cyborg for NPUs and GPUs and FPGAs and whatever hardware should be chosen. I'd like to actually see some progress made on cyborg/nova integration before we get our hopes up about cyborg/something completely outside of openstack integration, but that's my biased view on it from being a nova person. See [1] for context from a discussion yesterday. I don't really know how the TC drives this more than the cyborg team themselves, but OK. [1] http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-09-06-14.00.log.html#l-120 -- Thanks, Matt From openstack at fried.cc Fri Sep 7 14:17:52 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 7 Sep 2018 09:17:52 -0500 Subject: [openstack-dev] [nova][placement] No NovaScheduler meeting during PTG Message-ID: Our regularly scheduled Monday nova-scheduler meeting will not take place next Monday, Sept 10th. We'll resume the following week. -efried From mark at stackhpc.com Fri Sep 7 14:21:52 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 7 Sep 2018 15:21:52 +0100 Subject: [openstack-dev] [Kolla] Denver PTG schedule In-Reply-To: References: Message-ID: Thanks for putting that together Eduardo. I've listed the sessions that I expect to attend below. Mark On Thu, 6 Sep 2018 at 17:40, Eduardo Gonzalez wrote: > Hi folks, > This is the schedule for Kolla Denver PTG. If someone have a hard conflict > with any discussion please let me know if we can find a slot which matches > better. 
> > Wednesday > 3:10 - 3:55 [kolla-ansible] DRY ansible > 4:00 - 4:45 [kolla-ansible] Kayobe > > Thursday > 9:50 - 10:35 [kolla-ansible] Firewall configuration > 10.40 - 11:15 [kolla-ansible] Fast-forward upgrade > 11:20 - 12:00 [kolla-ansible] Multi release support > 12:00 - 13:30 LUNCH > 1:30 - 2:15 [kolla-ansible] Cells v2 > 2:20 - 3:05 [kolla-ansible] Running kolla at scale > ? > 3:10 - 3:55 Kolla GUI > 4:00 - 4:45 PTG recap and Stein priority setting > > Friday > 9:00 - 9:45 [CI] Service testing and scenarios > 9:50 - 10:35 [CI] Upgrade jobs > 10.40 - 11:15 [CI] Usage of tempest and rally > 11:20 - 12:00 Define PTG TODOs (blueprints, specs, etc) > 12:00 - 13:30 LUNCH > > Regards > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Sep 7 14:26:50 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 7 Sep 2018 15:26:50 +0100 Subject: [openstack-dev] [Kolla] Denver PTG schedule In-Reply-To: References: Message-ID: Looks like the ironic and kolla rooms are next to each other this time, so I can hop between them to help setup the conferencing etc. On Fri, 7 Sep 2018 at 15:21, Mark Goddard wrote: > Thanks for putting that together Eduardo. I've listed the sessions that I > expect to attend below. > Mark > > On Thu, 6 Sep 2018 at 17:40, Eduardo Gonzalez wrote: > >> Hi folks, >> This is the schedule for Kolla Denver PTG. If someone have a hard >> conflict with any discussion please let me know if we can find a slot which >> matches better. 
>> >> Wednesday >> 3:10 - 3:55 [kolla-ansible] DRY ansible >> 4:00 - 4:45 [kolla-ansible] Kayobe >> >> Thursday >> 9:50 - 10:35 [kolla-ansible] Firewall configuration >> 10.40 - 11:15 [kolla-ansible] Fast-forward upgrade >> 11:20 - 12:00 [kolla-ansible] Multi release support >> 12:00 - 13:30 LUNCH >> 1:30 - 2:15 [kolla-ansible] Cells v2 >> 2:20 - 3:05 [kolla-ansible] Running kolla at scale >> ? >> 3:10 - 3:55 Kolla GUI >> 4:00 - 4:45 PTG recap and Stein priority setting >> >> Friday >> 9:00 - 9:45 [CI] Service testing and scenarios >> 9:50 - 10:35 [CI] Upgrade jobs >> 10.40 - 11:15 [CI] Usage of tempest and rally >> 11:20 - 12:00 Define PTG TODOs (blueprints, specs, etc) >> 12:00 - 13:30 LUNCH >> >> Regards >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Sep 7 14:27:45 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 09:27:45 -0500 Subject: [openstack-dev] [nova] [placement] modified devstack using openstack/placement In-Reply-To: References: Message-ID: <5d68fd7b-80f4-182d-1a64-17365951def2@gmail.com> On 9/6/2018 5:05 AM, Chris Dent wrote: > One question I have on the lib/placement changes in devstack: Is it > useful to make those changes be guarded by a conditional of the > form: > >    if placement came from its own repo: >        do the new stuff >    else: >        do the old stuff > > ? I think it would be mostly confusing if this is conditional/configurable. For example, if the nova-next job was changed to use placement from the placement repo, but the integrated gate jobs (tempest-full) were still all using placement from nova. 
I think we need to get to the point where we're ready to flip that switch to CI against the placement repo and then deal with the fallout. -- Thanks, Matt From zhipengh512 at gmail.com Fri Sep 7 14:29:41 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 7 Sep 2018 22:29:41 +0800 Subject: [openstack-dev] [election] [tc] TC candidacy In-Reply-To: <09ea4107-9d1c-b9e1-0658-cf48580bc404@gmail.com> References: <09ea4107-9d1c-b9e1-0658-cf48580bc404@gmail.com> Message-ID: Well, nova-cyborg is surely the top priority for OpenStack cross-project collaboration. The two initiatives I mentioned are more in the field of cross-community work. I think I didn't elaborate on how the TC role fits into this picture. For the TC, I think it is important to be able to help with cross-community collaboration; one community is intimidating enough, let alone venturing into totally different communities. With that said, other than the two directions that I will personally involve myself with, I will also help other cross-community ideas/initiatives build relationships and get work done. I guess it is more convincing when you as a TC member have actual skin in the game in cross-community development. So yes, individual teams will probably be the best ones to drive such collaborations, but it would also be nice to have the TC lend a hand when there is a need :) On Fri, Sep 7, 2018 at 10:17 PM Matt Riedemann wrote: > On 9/7/2018 8:54 AM, Zhipeng Huang wrote: > > One is related to the cyborg project where the team is working to build > > the open heterogenous resource mgmt platform. I would like to extend > > this mission over to kubernetes, which currently lack of such component > > and could benefit hugely from our work here in OpenStack.
There are also > > other communities like OPNFV where the edge cloud project, the C-RAN > > project and Rocket project will be integrating and testing cyborg, as > > well as ONNX where AI models will meet the resource models we defined in > > Cyborg for NPUs and GPUs and FPGAs and whatever hardware should be > chosen. > > I'd like to actually see some progress made on cyborg/nova integration > before we get our hopes up about cyborg/something completely outside of > openstack integration, but that's my biased view on it from being a nova > person. See [1] for context from a discussion yesterday. I don't really > know how the TC drives this more than the cyborg team themselves, but OK. > > [1] > > http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-09-06-14.00.log.html#l-120 > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From prometheanfire at gentoo.org Fri Sep 7 14:30:23 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 7 Sep 2018 09:30:23 -0500 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][Telemetry] all requirement updates are currently blocked In-Reply-To: References: <20180907041847.GF31148@thor.bakeyournoodle.com> Message-ID: <20180907143023.yqsvc4fcatflpr23@mthode.org> On 18-09-07 11:09:15, Julien Danjou wrote: > On Fri, Sep 07 2018, Tony Breeds wrote: > > > On Thu, Sep 06, 2018 at 01:33:12PM +0300, Michel Peterson wrote: > > > >> I remember that before landing the problematic patch [1] there was some > >> discussion around it. Basically the problem was not n-odl but ceilometer > >> not being in pypi, but we never foresaw this problem. > >> > >> Now that the problem is so critical, the question is how can we, from the > >> n-odl team, help in fixing this? I am open to help in any effort that > >> involves n-odl or any other project. > > > > As other have pointed out we can just ask the Telemetry team (PTL on CC) > > why we can't publish ceilometer to pypi? > > You can, I've already said +1 on a review a few weeks ago. :) > Mind linking? I can't find it. > > https://pypi.org/project/ceilometer/ certainly seems to be the right > > project. > > > > The crux of the code issue is: > > from ceilometer.network.statistics import driver > > > > in networking_odl/ceilometer/network/statistics/opendaylight_v2/driver.py > > > > If this is supposed to be used they way you are how are prjects supposed > > to get the ceilometer code? > > -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Fri Sep 7 14:58:44 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 07 Sep 2018 10:58:44 -0400 Subject: [openstack-dev] [Release-job-failures] Release of openstack/networking-ansible failed In-Reply-To: References: Message-ID: <1536332259-sup-5495@lrrr.local> Excerpts from zuul's message of 2018-09-07 12:05:20 +0000: > Build failed. > > - release-openstack-python http://logs.openstack.org/63/639a3c3590ec20c33b1435e960d5331780298915/release/release-openstack-python/f485e0f/ : POST_FAILURE in 7m 15s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > - trigger-readthedocs-webhook http://logs.openstack.org/63/639a3c3590ec20c33b1435e960d5331780298915/release/trigger-readthedocs-webhook/b1705ca/ : FAILURE in 1m 33s > Based on the error at [1], it looks like someone is manually publishing releases from openstack/networking-ansible and then tagging them, instead of letting the release machinery publish based on the tag. Doug [1] http://logs.openstack.org/63/639a3c3590ec20c33b1435e960d5331780298915/release/release-openstack-python/f485e0f/job-output.txt.gz#_2018-09-07_12_04_51_151414 From dms at danplanet.com Fri Sep 7 15:17:56 2018 From: dms at danplanet.com (Dan Smith) Date: Fri, 07 Sep 2018 08:17:56 -0700 Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: > The other obvious thing is the database. The placement repo code as-is > today still has the check for whether or not it should use the > placement database but falls back to using the nova_api database > [5]. So technically you could point the extracted placement at the > same nova_api database and it should work. 
However, at some point > deployers will clearly need to copy the placement-related tables out > of the nova_api DB to a new placement DB and make sure the > 'migrate_version' table is dropped so that placement DB schema > versions can reset to 1. I think it's wrong to act like placement and nova-api schemas are the same. One is a clone of the other at a point in time, and technically it will work today. However the placement db sync tool won't do the right thing, and I think we run the major risk of operators not fully grokking what is going on here, seeing that pointing placement at nova-api "works" and move on. Later, when we add the next placement db migration (which could technically happen in stein), they will either screw their nova-api schema, or mess up their versioning, or be unable to apply the placement change. > With respect to grenade and making this work in our own upgrade CI > testing, we have I think two options (which might not be mutually > exclusive): > > 1. Make placement support using nova.conf if placement.conf isn't > found for Stein with lots of big warnings that it's going away in > T. Then Rocky nova.conf with the nova_api database configuration just > continues to work for placement in Stein. I don't think we then have > any grenade changes to make, at least in Stein for upgrading *from* > Rocky. Assuming fresh devstack installs in Stein use placement.conf > and a placement-specific database, then upgrades from Stein to T > should also be OK with respect to grenade, but likely punts the > cut-over issue for all other deployment projects (because we don't CI > with grenade doing Rocky->Stein->T, or FFU in other words). As I have said above and in the review, I really think this is the wrong approach. At the current point of time, the placement schema is a clone of the nova-api schema, and technically they will work. 
At the first point that placement evolves its schema, that will no longer be a workable solution, unless we also evolve nova-api's database in lockstep. > 2. If placement doesn't support nova.conf in Stein, then grenade will > require an (exceptional) [6] from-rocky upgrade script which will (a) > write out placement.conf fresh and (b) run a DB migration script, > likely housed in the placement repo, to create the placement database > and copy the placement-specific tables out of the nova_api > database. Any script like this is likely needed regardless of what we > do in grenade because deployers will need to eventually do this once > placement would drop support for using nova.conf (if we went with > option 1). Yep, and I'm asserting that we should write that script, make grenade do that step, and confirm that it works. I think operators should do that step during the stein upgrade because that's where the fork/split of history and schema is happening. I'll volunteer to do the grenade side at least. Maybe it would help to call out specifically that, IMHO, this can not and should not follow the typical config deprecation process. It's not a simple case of just making sure we "find" the nova-api database in the various configs. The problem is that _after_ the split, they are _not_ the same thing and should not be considered as the same. Thus, I think to avoid major disaster and major time sink for operators later, we need to impose the minor effort now to make sure that they don't take the process of deploying a new service lightly. Jay's original relatively small concern was that deploying a new placement service and failing to properly configure it would result in a placement running with the default, empty, sqlite database. That's a valid concern, and I think all we need to do is make sure we fail in that case, explaining the situation. 
We just had a hangout on the topic and I think we've come around to the consensus that just removing the default-to-empty-sqlite behavior is the right thing to do. Placement won't magically find nova.conf if it exists and jump into its database, and it also won't do the silly thing of starting up with an empty database if the very important config step is missed in the process of deploying placement itself. Operators will have to deploy the new package and do the database surgery (which we will provide instructions and a script for) as part of that process, but there's really no other sane alternative without changing the current agreed-to plan regarding the split. Is everyone okay with the above summary of the outcome? --Dan From corey.bryant at canonical.com Fri Sep 7 15:18:37 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 7 Sep 2018 11:18:37 -0400 Subject: [openstack-dev] [Openstack] OpenStack Rocky for Ubuntu 18.04 LTS Message-ID: The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Rocky on Ubuntu 18.04 LTS via the Ubuntu Cloud Archive. 
Details of the Rocky release can be found at: https://www.openstack.org/software/rocky To get access to the Ubuntu Rocky packages: Ubuntu 18.04 LTS ----------------------- You can enable the Ubuntu Cloud Archive pocket for OpenStack Rocky on Ubuntu 18.04 installations by running the following commands: sudo add-apt-repository cloud-archive:rocky sudo apt update The Ubuntu Cloud Archive for Rocky includes updates for: aodh, barbican, ceilometer, ceph (13.2.1), cinder, designate, designate-dashboard, glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl, networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-vpnaas, nova, nova-lxd, octavia, openstack-trove, openvswitch (2.10.0), panko, sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar. For a full list of packages and versions, please refer to: http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/rocky_versions.html Python 3 support --------------------- Python 3 packages are now available for all of the above packages except swift. All of these packages have successfully been unit tested with at least Python 3.6. Function testing is ongoing and fixes will continue to be backported to Rocky. Python 3 enablement -------------------------- In Rocky, Python 2 packages will still be installed by default for all packages except gnocchi and octavia, which are Python 3 by default. In a future release, we will switch all packages to Python 3 by default. 
To enable Python 3 for existing installations: # upgrade to latest Rocky package versions first, then: sudo apt install python3-<project> [1] sudo apt install libapache2-mod-wsgi-py3 # not required for all packages [2] sudo apt purge python-<project> [1] sudo apt autoremove --purge sudo systemctl restart <project>-* sudo systemctl restart apache2 # not required for all packages [2] For example: sudo apt install aodh-* sudo apt install python3-aodh libapache2-mod-wsgi-py3 sudo apt purge python-aodh sudo apt autoremove --purge sudo systemctl restart aodh-* apache2 To enable Python 3 for new installations: sudo apt install python3-<project> [1] sudo apt install libapache2-mod-wsgi-py3 # not required for all packages [2] sudo apt install <package> For example: sudo apt install python3-aodh libapache2-mod-wsgi-py3 aodh-api [1] The naming convention of python packages is generally python-<project> and python3-<project>. For horizon, however, the packages are named python-django-horizon and python3-django-horizon. [2] The following packages are run under apache2 and require installation of libapache2-mod-wsgi-py3 to enable Python 3 support: aodh-api, cinder-api, barbican-api, keystone, nova-placement-api, openstack-dashboard, panko-api, sahara-api Other notable changes ---------------------------- sahara-api: sahara API now runs under apache2 with mod_wsgi Branch Package Builds ----------------------------- If you would like to try out the latest updates to branches, we deliver continuously integrated packages on each upstream commit via the following PPAs: sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata sudo add-apt-repository ppa:openstack-ubuntu-testing/pike sudo add-apt-repository ppa:openstack-ubuntu-testing/queens sudo add-apt-repository ppa:openstack-ubuntu-testing/rocky Reporting bugs ------------------- If you have any issues please report bugs using the 'ubuntu-bug' tool to ensure that bugs get logged in the right place in Launchpad: sudo ubuntu-bug 
nova-conductor Thanks to everyone who has contributed to OpenStack Rocky, both upstream and downstream. Special thanks to the Puppet OpenStack modules team and the OpenStack Charms team for their continued early testing of the Ubuntu Cloud Archive, as well as the Ubuntu and Debian OpenStack teams for all of their contributions. Have fun and see you in Stein! Cheers, Corey (on behalf of the Ubuntu OpenStack team) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Fri Sep 7 15:24:51 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 7 Sep 2018 11:24:51 -0400 Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction In-Reply-To: References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: On Fri, Sep 7, 2018 at 11:18 AM Dan Smith wrote: > > > The other obvious thing is the database. The placement repo code as-is > > today still has the check for whether or not it should use the > > placement database but falls back to using the nova_api database > > [5]. So technically you could point the extracted placement at the > > same nova_api database and it should work. However, at some point > > deployers will clearly need to copy the placement-related tables out > > of the nova_api DB to a new placement DB and make sure the > > 'migrate_version' table is dropped so that placement DB schema > > versions can reset to 1. > > I think it's wrong to act like placement and nova-api schemas are the > same. One is a clone of the other at a point in time, and technically it > will work today. However the placement db sync tool won't do the right > thing, and I think we run the major risk of operators not fully grokking > what is going on here, seeing that pointing placement at nova-api > "works" and move on. 
Later, when we add the next placement db migration > (which could technically happen in stein), they will either screw their > nova-api schema, or mess up their versioning, or be unable to apply the > placement change. > > > With respect to grenade and making this work in our own upgrade CI > > testing, we have I think two options (which might not be mutually > > exclusive): > > > > 1. Make placement support using nova.conf if placement.conf isn't > > found for Stein with lots of big warnings that it's going away in > > T. Then Rocky nova.conf with the nova_api database configuration just > > continues to work for placement in Stein. I don't think we then have > > any grenade changes to make, at least in Stein for upgrading *from* > > Rocky. Assuming fresh devstack installs in Stein use placement.conf > > and a placement-specific database, then upgrades from Stein to T > > should also be OK with respect to grenade, but likely punts the > > cut-over issue for all other deployment projects (because we don't CI > > with grenade doing Rocky->Stein->T, or FFU in other words). > > As I have said above and in the review, I really think this is the wrong > approach. At the current point of time, the placement schema is a clone > of the nova-api schema, and technically they will work. At the first point > that placement evolves its schema, that will no longer be a workable > solution, unless we also evolve nova-api's database in lockstep. > > > 2. If placement doesn't support nova.conf in Stein, then grenade will > > require an (exceptional) [6] from-rocky upgrade script which will (a) > > write out placement.conf fresh and (b) run a DB migration script, > > likely housed in the placement repo, to create the placement database > > and copy the placement-specific tables out of the nova_api > > database. 
Any script like this is likely needed regardless of what we > > do in grenade because deployers will need to eventually do this once > > placement would drop support for using nova.conf (if we went with > > option 1). > > Yep, and I'm asserting that we should write that script, make grenade do > that step, and confirm that it works. I think operators should do that > step during the stein upgrade because that's where the fork/split of > history and schema is happening. I'll volunteer to do the grenade side > at least. > > Maybe it would help to call out specifically that, IMHO, this can not > and should not follow the typical config deprecation process. It's not a > simple case of just making sure we "find" the nova-api database in the > various configs. The problem is that _after_ the split, they are _not_ > the same thing and should not be considered as the same. Thus, I think > to avoid major disaster and major time sink for operators later, we need > to impose the minor effort now to make sure that they don't take the > process of deploying a new service lightly. I think that's a valid different approach. I'd be okay with this if the appropriate scripts and documentation are out there. In this case, the Grenade work will be a really useful asset for looking over upgrades. > Jay's original relatively small concern was that deploying a new > placement service and failing to properly configure it would result in a > placement running with the default, empty, sqlite database. That's a > valid concern, and I think all we need to do is make sure we fail in > that case, explaining the situation. If it's a hard fail, that seems reasonable and ensures no surprises occur during the upgrade or much later. > We just had a hangout on the topic and I think we've come around to the > consensus that just removing the default-to-empty-sqlite behavior is the > right thing to do. 
Placement won't magically find nova.conf if it exists > and jump into its database, and it also won't do the silly thing of > starting up with an empty database if the very important config step is > missed in the process of deploying placement itself. Operators will have > to deploy the new package and do the database surgery (which we will > provide instructions and a script for) as part of that process, but > there's really no other sane alternative without changing the current > agreed-to plan regarding the split. > > Is everyone okay with the above summary of the outcome? I've dropped my -1 from this given the discussion https://review.openstack.org/#/c/600157/ > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From edmondsw at us.ibm.com Fri Sep 7 15:25:14 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Fri, 7 Sep 2018 11:25:14 -0400 Subject: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests In-Reply-To: <165b2ae50f1.11ea66ed588256.3160647352194636247@ghanshyammann.com> References: <11be89ad-a59a-1fe6-5c7b-badb4a06e643@fried.cc> <1b586dfd-594f-3f44-b6f3-8b232aa0ab5b@fried.cc> <165b2ae50f1.11ea66ed588256.3160647352194636247@ghanshyammann.com> Message-ID: Ghanshyam Mann wrote on 09/07/2018 02:18:13 AM: snip.. > neutron-tempest-plugin or other service test you can always avoid to > run with regex. And i do not think compute negative or DB test will > take much time to run. 
But still if you want to avoid to run then, I > think it is easy to maintain a whitelist regex file on CI side which > can run only compute driver tests(61 in this case). > > Tagging compute driver on tempest side is little hard to maintain i > feel and it can goes out of date very easily. If you have any other > idea on that, we can surly talk in PTG on this. The concern that I have with whitelisting in a given CI is that it has to be done independently in every compute driver CI. So while I agree that it won't be easy to maintain tagging for compute driver on the tempest side, at least that's one place / easier than doing it in every driver CI. When anyone figures out that a change is needed, all of the CIs would benefit together if there is a shared solution. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Fri Sep 7 15:34:08 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 7 Sep 2018 10:34:08 -0500 Subject: [openstack-dev] [cinder][placement] Room Scheduled for Cinder Placement Discussion ... Message-ID: <9892eeee-a772-e32a-cd9e-81a6c0ae8dba@gmail.com> All, The results of the Doodle poll suggested that the end of the day Tuesday was the best option for us all to get together. [1] I have scheduled the Big Thompson Room on Tuesday from 15:15 to 17:00. I hope we can all get together there and then to have a good discussion. Thanks! Jay [1] https://doodle.com/poll/4twwhy46bxerrthx#table From jungleboyj at gmail.com Fri Sep 7 15:38:14 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 7 Sep 2018 10:38:14 -0500 Subject: [openstack-dev] [cinder][ptg] Topics scheduled for next week ... Message-ID: Team, I have created an etherpad for each of the days of the PTG and split out the proposed topics from the planning etherpad into the individual days for discussion: [1] [2] [3] If you want to add an additional topic please add it to Friday or find some time on one of the other days.
I look forward to discussing all these topics with you all next week. Thanks! Jay [1] https://etherpad.openstack.org/p/cinder-ptg-stein-wednesday [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday [3] https://etherpad.openstack.org/p/cinder-ptg-stein-friday From jaypipes at gmail.com Fri Sep 7 15:40:19 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 7 Sep 2018 11:40:19 -0400 Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction In-Reply-To: References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: On 09/07/2018 11:17 AM, Dan Smith wrote: >> The other obvious thing is the database. The placement repo code as-is >> today still has the check for whether or not it should use the >> placement database but falls back to using the nova_api database >> [5]. So technically you could point the extracted placement at the >> same nova_api database and it should work. However, at some point >> deployers will clearly need to copy the placement-related tables out >> of the nova_api DB to a new placement DB and make sure the >> 'migrate_version' table is dropped so that placement DB schema >> versions can reset to 1. > > I think it's wrong to act like placement and nova-api schemas are the > same. One is a clone of the other at a point in time, and technically it > will work today. However the placement db sync tool won't do the right > thing, and I think we run the major risk of operators not fully grokking > what is going on here, seeing that pointing placement at nova-api > "works" and move on. Later, when we add the next placement db migration > (which could technically happen in stein), they will either screw their > nova-api schema, or mess up their versioning, or be unable to apply the > placement change. > >> With respect to grenade and making this work in our own upgrade CI >> testing, we have I think two options (which might not be mutually >> exclusive): >> >> 1. 
Make placement support using nova.conf if placement.conf isn't >> found for Stein with lots of big warnings that it's going away in >> T. Then Rocky nova.conf with the nova_api database configuration just >> continues to work for placement in Stein. I don't think we then have >> any grenade changes to make, at least in Stein for upgrading *from* >> Rocky. Assuming fresh devstack installs in Stein use placement.conf >> and a placement-specific database, then upgrades from Stein to T >> should also be OK with respect to grenade, but likely punts the >> cut-over issue for all other deployment projects (because we don't CI >> with grenade doing Rocky->Stein->T, or FFU in other words). > > As I have said above and in the review, I really think this is the wrong > approach. At the current point of time, the placement schema is a clone > of the nova-api schema, and technically they will work. At the first point > that placement evolves its schema, that will no longer be a workable > solution, unless we also evolve nova-api's database in lockstep. > >> 2. If placement doesn't support nova.conf in Stein, then grenade will >> require an (exceptional) [6] from-rocky upgrade script which will (a) >> write out placement.conf fresh and (b) run a DB migration script, >> likely housed in the placement repo, to create the placement database >> and copy the placement-specific tables out of the nova_api >> database. Any script like this is likely needed regardless of what we >> do in grenade because deployers will need to eventually do this once >> placement would drop support for using nova.conf (if we went with >> option 1). > > Yep, and I'm asserting that we should write that script, make grenade do > that step, and confirm that it works. I think operators should do that > step during the stein upgrade because that's where the fork/split of > history and schema is happening. I'll volunteer to do the grenade side > at least. 
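[Editor's note: the core of the table-copy script discussed above could look roughly like the following sketch. Everything here is hypothetical — the table list is an illustrative subset, and sqlite3 is used only as a self-contained stand-in for the real MySQL/PostgreSQL databases — but it captures the important detail from the thread: copy the placement-specific tables and deliberately leave `migrate_version` behind so the new database's schema versioning can start fresh.]

```python
import sqlite3

# Illustrative subset of the placement-specific tables; a real script
# would take the full list from the placement schema.
PLACEMENT_TABLES = ["resource_providers", "inventories", "allocations"]


def copy_placement_tables(src, dst):
    """Copy placement tables from the nova_api DB into a fresh placement
    DB. 'migrate_version' is intentionally NOT copied, so the placement
    DB can begin its own schema version history at 1."""
    for table in PLACEMENT_TABLES:
        row = src.execute(
            "SELECT sql FROM sqlite_master WHERE type='table' AND name=?",
            (table,)).fetchone()
        if row is None:
            continue  # table not present in this deployment
        dst.execute(row[0])  # recreate the table schema
        for record in src.execute("SELECT * FROM %s" % table):
            placeholders = ",".join("?" * len(record))
            dst.execute("INSERT INTO %s VALUES (%s)" % (table, placeholders),
                        record)
    dst.commit()
```

A from-rocky upgrade script would then point the database connection in placement.conf at the new database before restarting the placement service.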
> > Maybe it would help to call out specifically that, IMHO, this can not > and should not follow the typical config deprecation process. It's not a > simple case of just making sure we "find" the nova-api database in the > various configs. The problem is that _after_ the split, they are _not_ > the same thing and should not be considered as the same. Thus, I think > to avoid major disaster and major time sink for operators later, we need > to impose the minor effort now to make sure that they don't take the > process of deploying a new service lightly. > > Jay's original relatively small concern was that deploying a new > placement service and failing to properly configure it would result in a > placement running with the default, empty, sqlite database. That's a > valid concern, and I think all we need to do is make sure we fail in > that case, explaining the situation. > > We just had a hangout on the topic and I think we've come around to the > consensus that just removing the default-to-empty-sqlite behavior is the > right thing to do. Placement won't magically find nova.conf if it exists > and jump into its database, and it also won't do the silly thing of > starting up with an empty database if the very important config step is > missed in the process of deploying placement itself. Operators will have > to deploy the new package and do the database surgery (which we will > provide instructions and a script for) as part of that process, but > there's really no other sane alternative without changing the current > agreed-to plan regarding the split. > > Is everyone okay with the above summary of the outcome? Yes from my perspective. -jay From johnsomor at gmail.com Fri Sep 7 15:41:36 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 7 Sep 2018 08:41:36 -0700 Subject: [openstack-dev] [Openstack] OpenStack Rocky for Ubuntu 18.04 LTS In-Reply-To: References: Message-ID: Corey, Awesome! Excited to see Octavia included in the release. 
Michael On Fri, Sep 7, 2018 at 8:19 AM Corey Bryant wrote: > > The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Rocky on Ubuntu 18.04 LTS via the Ubuntu Cloud Archive. Details of the Rocky release can be found at: https://www.openstack.org/software/rocky > > To get access to the Ubuntu Rocky packages: > > Ubuntu 18.04 LTS > ----------------------- > > You can enable the Ubuntu Cloud Archive pocket for OpenStack Rocky on Ubuntu 18.04 installations by running the following commands: > > sudo add-apt-repository cloud-archive:rocky > sudo apt update > > The Ubuntu Cloud Archive for Rocky includes updates for: > > aodh, barbican, ceilometer, ceph (13.2.1), cinder, designate, designate-dashboard, glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl, networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-vpnaas, nova, nova-lxd, octavia, openstack-trove, openvswitch (2.10.0), panko, sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar. > > For a full list of packages and versions, please refer to: > http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/rocky_versions.html > > Python 3 support > --------------------- > Python 3 packages are now available for all of the above packages except swift. All of these packages have successfully been unit tested with at least Python 3.6. Function testing is ongoing and fixes will continue to be backported to Rocky. > > Python 3 enablement > -------------------------- > In Rocky, Python 2 packages will still be installed by default for all packages except gnocchi and octavia, which are Python 3 by default. In a future release, we will switch all packages to Python 3 by default. 
> > To enable Python 3 for existing installations: > # upgrade to latest Rocky package versions first, then: > sudo apt install python3- [1] > sudo apt install libapache2-mod-wsgi-py3 # not required for all packages [2] > sudo apt purge python- [1] > sudo apt autoremove --purge > sudo systemctl restart -* > sudo systemctl restart apache2 # not required for all packages [2] > > For example: > sudo apt install aodh-* > sudo apt install python3-aodh libapache2-mod-wsgi-py3 > sudo apt purge python-aodh > sudo apt autoremove --purge > sudo systemctl restart aodh-* apache2 > > To enable Python 3 for new installations: > sudo apt install python3- [1] > sudo apt install libapache2-mod-wsgi-py3 # not required for all packages [2] > sudo apt install - > > For example: > sudo apt install python3-aodh libapache2-mod-wsgi-py3 aodh-api > > [1] The naming convention of python packages is generally python- and python3-. For horizon, however, the packages are named python-django-horizon and python3-django-horizon. 
> > [2] The following packages are run under apache2 and require installation of libapache2-mod-wsgi-py3 to enable Python 3 support: > aodh-api, cinder-api, barbican-api, keystone, nova-placement-api, openstack-dashboard, panko-api, sahara-api > > Other notable changes > ---------------------------- > sahara-api: sahara API now runs under apache2 with mod_wsgi > > Branch Package Builds > ----------------------------- > If you would like to try out the latest updates to branches, we deliver continuously integrated packages on each upstream commit via the following PPA’s: > > sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka > sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata > sudo add-apt-repository ppa:openstack-ubuntu-testing/pike > sudo add-apt-repository ppa:openstack-ubuntu-testing/queens > sudo add-apt-repository ppa:openstack-ubuntu-testing/rocky > > Reporting bugs > ------------------- > If you have any issues please report bugs using the 'ubuntu-bug' tool to ensure that bugs get logged in the right place in Launchpad: > > sudo ubuntu-bug nova-conductor > > Thanks to everyone who has contributed to OpenStack Rocky, both upstream and downstream. Special thanks to the Puppet OpenStack modules team and the OpenStack Charms team for their continued early testing of the Ubuntu Cloud Archive, as well as the Ubuntu and Debian OpenStack teams for all of their contributions. > > Have fun and see you in Stein! 
> > Cheers, > Corey > (on behalf of the Ubuntu OpenStack team) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Fri Sep 7 15:42:28 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 07 Sep 2018 11:42:28 -0400 Subject: [openstack-dev] [goals][python3][ptl] ptg discussions about python3 goal Message-ID: <1536334272-sup-371@lrrr.local> Based on some feedback for goal champions from the last PTG, I thought it would be a good idea to be explicit about the best way to reach me in case your team has questions about the python3 goal (unfortunately I'm the only champion for that goal who is able to be at the PTG this time). There is an "ask me anything" help room available on Monday and Tuesday, but looking at the list of other things going on then I'm likely to have a pretty full schedule those two days. I'm booked all day on Friday, so it's going to work best for you to let me know where and when on Wednesday or Thursday you would like to talk, and I will come to you. Try to give me more than a few minutes notice. :-) If you email me directly at doug at doughellmann.com it will go to my phone. Twitter mentions or DMs @doughellmann will also go to my phone. IRC mentions (dhellmann in #openstack-dev) will rely on me being online, which is less likely if I'm in a session. 
Doug From s at cassiba.com Fri Sep 7 15:55:56 2018 From: s at cassiba.com (Samuel Cassiba) Date: Fri, 7 Sep 2018 08:55:56 -0700 Subject: [openstack-dev] [election] [tc] TC candidacy In-Reply-To: <584d6c65-7b13-e0d1-842a-ebb9f7fb6290@gmail.com> References: <584d6c65-7b13-e0d1-842a-ebb9f7fb6290@gmail.com> Message-ID: On Fri, Sep 7, 2018 at 6:22 AM, Matt Riedemann wrote: > On 9/5/2018 2:49 PM, Samuel Cassiba wrote: >> >> Though my hands-on experience goes back several releases, I still view >> things from the outside-looking-in perspective. Having the outsider >> lens is crucial in the long-term for any consensus-driven group, >> regardless of that consensus. >> >> Regardless of the election outcome, this is me taking steps to having a >> larger involvement in the overall conversations that drive so much of >> our daily lives. At the end of the day, we're all just groups of people >> trying to do our jobs. I view this as an opportunity to give back to a >> community that has given me so much. > > > Are there specific initiatives you plan on pushing forward if on the TC? I'm > thinking about stuff from the laundry list here: > > https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Other_Initiatives > Excellent question! It's not in my nature to push specific agendas. That said, being in the deploy space, constellations is something that does have a specific gravity that would, no doubt, draw me in, whether or not I am part of the TC. I've viewed projects in the deploy space, such aq Furthering the adoption of secret management is another thing that hits close to home From lbragstad at gmail.com Fri Sep 7 16:05:53 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 7 Sep 2018 11:05:53 -0500 Subject: [openstack-dev] [stable][keystone] python3 goal progress and tox_install.sh removal In-Reply-To: <20180907073851.GA16495@thor.bakeyournoodle.com> References: <20180907073851.GA16495@thor.bakeyournoodle.com> Message-ID: Thanks for all the help, everyone. 
Updating the status of each repository and branch with respect to the python3 goal and which reviews are needed in order to get things squared away. Note that the linked python3 review is just the one to port the zuul job definitions, and not all patches generated for the goal. This is because the first patch was triggering the failure - likely due to the branch being broken by tox_install.sh or new pip versions among other things. The summary below is a list of things needed to get the tests passing up to that point, at which point we should be in a good state to pursue python3 issues if there are any. Branches in red and bold are in need of reviews, all of which should be set up to pass tests. If not then they should be dependent on patches to make them pass. *keystonemiddleware* - master: https://review.openstack.org/#/c/597659/ - *stable/rocky*: https://review.openstack.org/#/c/597694/ - *stable/queens*: https://review.openstack.org/#/c/597688/ - *stable/pike*: https://review.openstack.org/#/c/597682/ - *stable/ocata*: https://review.openstack.org/#/c/597677/ *keystoneauth* - master: https://review.openstack.org/#/c/597655/ - *stable/rocky*: https://review.openstack.org/#/c/597693/ - *stable/queens*: https://review.openstack.org/#/c/600564/ needed by https://review.openstack.org/#/c/597687/ - *stable/pike*: https://review.openstack.org/#/c/597681/ - *stable/ocata*: https://review.openstack.org/#/c/598346/ needed by https://review.openstack.org/#/c/597676/ *python-keystoneclient* - master: https://review.openstack.org/#/c/597671/ - *stable/rocky*: https://review.openstack.org/#/c/597696/ - *stable/queens*: https://review.openstack.org/#/c/597691/ - *stable/pike*: https://review.openstack.org/#/c/597685/ - *stable/ocata*: https://review.openstack.org/#/c/597679/ Hopefully this helps organize things a bit. I was losing my mind maintaining a mental map. Let me know if you see anything odd about the above. Otherwise feel free to give those a review.
Thanks, Lance On Fri, Sep 7, 2018 at 2:39 AM Tony Breeds wrote: > On Thu, Sep 06, 2018 at 03:01:01PM -0500, Lance Bragstad wrote: > > I'm noticing some odd cases with respect to the python 3 community goal > > [0]. So far my findings are specific to keystone repositories, but I can > > imagine this affecting other projects. > > > > Doug generated the python 3 reviews for keystone repositories, including > > the ones for stable branches. We noticed some issues with the ones > proposed > > to stable (keystoneauth, python-keystoneclient) and master > > (keystonemiddleware). For example, python-keystoneclient's stable/pike > [1] > > and stable/ocata [2] branches are both failing with something like [3]: > > > > ERROR: You must give at least one requirement to install (see "pip help > > install") > > I've updated 1 and 2 to do the same thing that lots of other repos do > and just exit 0 in this case. 1 and 2 now have a +1 from zuul. > > > > > I've attempted to remove tox_install.sh using several approaches with > > keystonemiddleware master [7]. None of which passed both unit tests and > the > > requirements check. > > Doug pointed out the fix here, which I added. It passed most of the > gate but failed in an unrelated neutron test so I've rechecked it. > > Yours Tony. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From s at cassiba.com Fri Sep 7 16:19:05 2018 From: s at cassiba.com (Samuel Cassiba) Date: Fri, 7 Sep 2018 09:19:05 -0700 Subject: [openstack-dev] [election] [tc] TC candidacy In-Reply-To: References: <584d6c65-7b13-e0d1-842a-ebb9f7fb6290@gmail.com> Message-ID: On Fri, Sep 7, 2018 at 8:55 AM, Samuel Cassiba wrote: > On Fri, Sep 7, 2018 at 6:22 AM, Matt Riedemann wrote: >> On 9/5/2018 2:49 PM, Samuel Cassiba wrote: >>> >>> Though my hands-on experience goes back several releases, I still view >>> things from the outside-looking-in perspective. Having the outsider >>> lens is crucial in the long-term for any consensus-driven group, >>> regardless of that consensus. >>> >>> Regardless of the election outcome, this is me taking steps to having a >>> larger involvement in the overall conversations that drive so much of >>> our daily lives. At the end of the day, we're all just groups of people >>> trying to do our jobs. I view this as an opportunity to give back to a >>> community that has given me so much. >> >> >> Are there specific initiatives you plan on pushing forward if on the TC? I'm >> thinking about stuff from the laundry list here: >> >> https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Other_Initiatives >> > > Excellent question! > > It's not in my nature to push specific agendas. That said, being in > the deploy space, constellations is something that does have a > specific gravity that would, no doubt, draw me in, whether or not I am > part of the TC. I've viewed projects in the deploy space, such aq > > Furthering the adoption of secret management is another thing that > hits close to home ...and that would be where an unintended keyboard-seeking Odin attack preemptively initiates a half-thought thought. It's hard to get upset at this face, though. https://i.imgur.com/c7tktmO.jpg To that point, projects like Chef have made use of encrypted secrets since more or less the dawn of time, but not at all in a portable way. 
Continuing the work to bring secrets under a single focus is something that I would also be a part of, with or without being on the TC. In both of these efforts, I envision having some manner of involvement no matter what. At the strategic level, working to ensure the disparate efforts are in alignment is where I would gravitate to. Best, Samuel From openstack at nemebean.com Fri Sep 7 16:32:25 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 7 Sep 2018 11:32:25 -0500 Subject: [openstack-dev] [election] [tc] TC candidacy In-Reply-To: References: <584d6c65-7b13-e0d1-842a-ebb9f7fb6290@gmail.com> Message-ID: <64e56b66-3f8f-9e4c-1a7f-a3f15a29f255@nemebean.com> On 09/07/2018 11:19 AM, Samuel Cassiba wrote: > On Fri, Sep 7, 2018 at 8:55 AM, Samuel Cassiba wrote: >> On Fri, Sep 7, 2018 at 6:22 AM, Matt Riedemann wrote: >>> On 9/5/2018 2:49 PM, Samuel Cassiba wrote: >>>> >>>> Though my hands-on experience goes back several releases, I still view >>>> things from the outside-looking-in perspective. Having the outsider >>>> lens is crucial in the long-term for any consensus-driven group, >>>> regardless of that consensus. >>>> >>>> Regardless of the election outcome, this is me taking steps to having a >>>> larger involvement in the overall conversations that drive so much of >>>> our daily lives. At the end of the day, we're all just groups of people >>>> trying to do our jobs. I view this as an opportunity to give back to a >>>> community that has given me so much. >>> >>> >>> Are there specific initiatives you plan on pushing forward if on the TC? I'm >>> thinking about stuff from the laundry list here: >>> >>> https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Other_Initiatives >>> >> >> Excellent question! >> >> It's not in my nature to push specific agendas. That said, being in >> the deploy space, constellations is something that does have a >> specific gravity that would, no doubt, draw me in, whether or not I am >> part of the TC. 
I've viewed projects in the deploy space, such aq >> >> Furthering the adoption of secret management is another thing that >> hits close to home > > ...and that would be where an unintended keyboard-seeking Odin attack > preemptively initiates a half-thought thought. It's hard to get upset > at this face, though. https://i.imgur.com/c7tktmO.jpg > > To that point, projects like Chef have made use of encrypted secrets > since more or less the dawn of time, but not at all in a portable way. > Continuing the work to bring secrets under a single focus is something > that I would also be a part of, with or without being on the TC. Just want to note that there is work underway in Oslo to address this. The base framework for it merged in Rocky and we plan to have integration with Castellan in Stein. https://review.openstack.org/#/c/474304/ > > In both of these efforts, I envision having some manner of involvement > no matter what. At the strategic level, working to ensure the > disparate efforts are in alignment is where I would gravitate to. > > Best, > Samuel > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From rico.lin.guanyu at gmail.com Fri Sep 7 17:17:12 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Sat, 8 Sep 2018 01:17:12 +0800 Subject: [openstack-dev] [election][tc] Announcing candidacy In-Reply-To: <8cd4c3ac-85c6-cac1-c1db-678b57634447@gmail.com> References: <8cd4c3ac-85c6-cac1-c1db-678b57634447@gmail.com> Message-ID: On Fri, Sep 7, 2018 at 9:21 PM Matt Riedemann wrote: > On 9/6/2018 1:49 PM, Rico Lin wrote: > > * Cross-community integrations (K8s, CloudFoundry, Ceph, OPNFV) > > Are there some specific initiatives or deliverables you have in mind > here, or just general open communication channels? 
It's very hard to > gauge any kind of progress/success on the latter. > I'm refering on clear communication channels and go from Use cases to real development tasks (As I try to explain in the last section of my candidacy). And here's some specific initiatives or deliverables sample I got in mind. * From StarlingX, some great improvement for Edge cases are delivered to projects. And there're also communications cross StarlingX and TCs on how to make it integrated with rest OpenStack projects (currently StarlingX still using it's own forks of OpenStack projects). And there're other projects that other organizations contribute to OpenStack or form another communities that depend on OpenStack. * We recently create a new repo `openstack-service-broker` [1]. Use Service Broker (A project from CloudFoundry) expose external resources to applications running in a PaaS. Which is exactly a integration cross CloudFoundry and OpenStack (protentially with K8s too) base on specific scenario. * K8s as one of the most popular case here, I believe we already can see some nice integration cross OpenStack and K8s. Include Manila, Keystone support in K8s, Magnum become one of official deployment tool in K8s community. Also I'm currently working on Integrate Heat AutoScaling to K8s cluster autoscaler as well [2]. * OPNFV integrated with OpenStack as it's cluster provider. So the goal here IMO is `how can we properly set up cross communication and improve scenarios with use cases or help these scenarios to become deliverable for user?`. SIGs are one of the format that I believe can help to accelerate this goal. As I mentioned in [3] and in goal `Strong the structure of SIGs`. We should consider to allow SIGs to become that platform from use cases and scenario to a trackable development tasks. I know there's nothing block a SIG to do so, but there's also no guideline, structure format, or other resources to make the path easier for SIG. Hope these explains wht the goal is in my mind. 
[1] https://github.com/openstack/openstack-service-broker [2] https://github.com/kubernetes/autoscaler/pull/1226 [3] http://lists.openstack.org/pipermail/openstack-sigs/2018-August/000453.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Sep 7 18:34:23 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 7 Sep 2018 13:34:23 -0500 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: References: Message-ID: That would be great! I can break down the work a little bit to help describe where we are at with different parts of the initiative. Hopefully it will be useful for your colleagues in case they haven't been closely following the effort. # keystone Based on the initial note in this thread, I'm sure you're aware of keystone's status with respect to unified limits. But to recap, the initial implementation landed in Queens and targeted flat enforcement [0]. During the Rocky PTG we sat down with other services and a few operators to explain the current status in keystone and if either developers or operators had feedback on the API specifically. Notes were captured in etherpad [1]. We spent the Rocky cycle fixing usability issues with the API [2] and implementing support for a hierarchical enforcement model [3]. At this point keystone is ready for services to start consuming the unified limits work. The unified limits API is still marked as experimental and it will likely stay that way until we have at least one project using unified limits. We can use that as an opportunity to do a final flush of any changes that need to be made to the API before fully supporting it. The keystone team expects that to be a quick transition, as we don't want to keep the API hanging in an experimental state. It's really just a safeguard to make sure we have the opportunity to use it in another service before fully committing to the API. Ultimately, we don't want to prematurely mark the API as supported when other services aren't even using it yet, and then realize it has issues that could have been fixed prior to the adoption phase.
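[Editor's note: as a rough illustration of the flat enforcement model mentioned above, limit resolution boils down to "a project-specific override wins; otherwise fall back to the service-wide registered default." The dictionaries below are hypothetical stand-ins for what keystone's unified limits API actually stores; this only sketches the lookup semantics.]

```python
def effective_limit(project_id, resource, project_limits, registered_defaults):
    """Resolve the limit to enforce for a project/resource pair under the
    flat model: a project-specific limit overrides the registered
    (default) limit; with no override, the default applies."""
    override = project_limits.get((project_id, resource))
    if override is not None:
        return override
    return registered_defaults.get(resource)
```

The hierarchical model layered on in Rocky adds parent/child project constraints on top of this same lookup.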
Ultimately, we don't want to prematurely mark the API as supported when other services aren't even using it yet, and then realize it has issues that could have been fixed prior to the adoption phase. # oslo.limit In parallel with the keystone work, we created a new library to aid services in consuming limits. Currently, the sole purpose of oslo.limit is to abstract project and project hierarchy information away from the service, so that services don't have to reimplement client code to understand project trees, which could arguably become complex and lead to inconsistencies in u-x across services. Ideally, a service should be able to pass some relatively basic information to oslo.limit and expect an answer on whether or not usage for that claim is valid. For example, here is a project ID, resource name, and resource quantity, tell me if this project is over it's associated limit or default limit. We're currently working on implementing the enforcement bits of oslo.limit, which requires making API calls to keystone in order to retrieve the deployed enforcement model, limit information, and project hierarchies. Then it needs to reason about those things and calculate usage from the service in order to determine if the request claim is valid or not. There are patches up for this work, and reviews are always welcome [4]. Note that we haven't released oslo.limit yet, but once the basic enforcement described above is implemented we will. Then services can officially pull it into their code as a dependency and we can work out remaining bugs in both keystone and oslo.limit. Once we're confident in both the API and the library, we'll bump oslo.limit to version 1.0 at the same time we graduate the unified limits API from "experimental" to "supported". Note that oslo libraries <1.0 are considered experimental, which fits nicely with the unified limit API being experimental as we shake out usability issues in both pieces of software. 
# services

Finally, we'll be in a position to start integrating oslo.limit into
services. I imagine this to be a coordinated effort between keystone, oslo,
and service developers. I do have a patch up that adds a conceptual overview
for developers consuming oslo.limit [5], which renders into [6].

To be honest, this is going to be a very large piece of work and it's going
to require a lot of communication. In my opinion, I think we can use the
first couple of iterations to generate some well-written usage documentation.
Any questions coming from developers in this phase should probably be
answered in documentation if we want to enable folks to pick this up and run
with it. Otherwise, I could see the handful of people pushing the effort
becoming a bottleneck in adoption.

Hopefully this helps paint the landscape of where things are currently with
respect to each piece. As always, let me know if you have any additional
questions. If people want to discuss online, you can find me, and other
contributors familiar with this topic, in #openstack-keystone or
#openstack-dev on IRC (nick: lbragstad).

[0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
[1] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
[2] https://tinyurl.com/y6ucarwm
[3] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/strict-two-level-enforcement-model.html
[4] https://review.openstack.org/#/q/project:openstack/oslo.limit+status:open
[5] https://review.openstack.org/#/c/600265/
[6] http://logs.openstack.org/65/600265/3/check/openstack-tox-docs/a6bcf38/html/user/usage.html

On Thu, Sep 6, 2018 at 8:56 PM Jaze Lee wrote:
> Lance Bragstad wrote on Thu, Sep 6, 2018 at 10:01 PM:
> >
> > I wish there was a better answer for this question, but currently there
> > are only a handful of us working on the initiative. If you, or someone
> > you know, is interested in getting involved, I'll happily help onboard
> > people.
>
> Well, I can recommend some of my colleagues to work on this. I wish that
> in Stein, all services can use unified limits to do the quota job.
>
> >
> > On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee wrote:
> >>
> >> On Stein only one service?
> >> Are there ways to move this along faster?
> >> Lance Bragstad wrote on Wed, Sep 5, 2018 at 9:29 PM:
> >> >
> >> > Not yet. Keystone worked through a bunch of usability improvements
> >> > with the unified limits API last release and created the oslo.limit
> >> > library. We have a patch or two left to land in oslo.limit before
> >> > projects can really start using unified limits [0].
> >> >
> >> > We're hoping to get this working with at least one resource in
> >> > another service (nova, cinder, etc...) in Stein.
> >> >
> >> > [0] https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init
> >> >
> >> > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee wrote:
> >> >>
> >> >> Hello,
> >> >>     Do nova and cinder use keystone's unified limits api to do the
> >> >> quota job?
> >> >>     If not, is there a plan to do this?
> >> >>     Thanks a lot.
> >> >>
> >> >> --
> >> >> 谦谦君子
> >> >>
> >> >> __________________________________________________________________________
> >> >> OpenStack Development Mailing List (not for usage questions)
> >> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> --
> >> 谦谦君子
>
> --
> 谦谦君子

From openstack at nemebean.com Fri Sep 7 18:43:55 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Fri, 7 Sep 2018 13:43:55 -0500
Subject: [openstack-dev] [nova][cinder] about unified limits
In-Reply-To: References: Message-ID: <808519e4-8b10-26a1-e069-f67a1aafd2f1@nemebean.com>

I will also note that I had an oslo.limit topic on the Oslo PTG schedule:

https://etherpad.openstack.org/p/oslo-stein-ptg-planning

I don't know whether anybody from Jaze's team will be there, but if so that
would be a good opportunity for some face-to-face discussion. I didn't give
it a whole lot of time, but I'm open to extending it if that would be
helpful.

On 09/07/2018 01:34 PM, Lance Bragstad wrote:
> That would be great! I can break down the work a little bit to help
> describe where we are at with different parts of the initiative. [...]

From mriedemos at gmail.com Fri Sep 7 20:11:20 2018 From:
mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 7 Sep 2018 15:11:20 -0500
Subject: [openstack-dev] [election][tc] Announcing candidacy
In-Reply-To: References: <8cd4c3ac-85c6-cac1-c1db-678b57634447@gmail.com> Message-ID: 

On 9/7/2018 12:17 PM, Rico Lin wrote:
> I'm referring to clear communication channels and to going from use cases
> to real development tasks (as I tried to explain in the last section of
> my candidacy).

Sorry, I totally missed the other details in your candidacy email because
they came after your signature. Otherwise I wouldn't have asked. :)

> And here are some specific initiatives and deliverables I have in mind:
> * From StarlingX, some great improvements for edge cases have been
> delivered to projects. There is also communication across StarlingX and
> the TC on how to integrate it with the rest of the OpenStack projects
> (currently StarlingX is still using its own forks of OpenStack projects).
> There are also other projects that other organizations contribute to
> OpenStack, and other communities that depend on OpenStack.
> * We recently created a new repo, `openstack-service-broker` [1]. It uses
> Service Broker (a project from Cloud Foundry) to expose external
> resources to applications running in a PaaS, which is exactly an
> integration across Cloud Foundry and OpenStack (potentially with K8s too)
> based on a specific scenario.
> * With K8s as one of the most popular cases here, I believe we can
> already see some nice integrations across OpenStack and K8s, including
> Manila and Keystone support in K8s, and Magnum becoming one of the
> official deployment tools in the K8s community. I'm also currently
> working on integrating Heat autoscaling with the K8s cluster autoscaler
> [2].
> * OPNFV is integrated with OpenStack as its cluster provider.

Yes this is good detail, thanks Rico.
-- Thanks, Matt From sean.mcginnis at gmx.com Fri Sep 7 20:14:10 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 7 Sep 2018 15:14:10 -0500 Subject: [openstack-dev] [release] Release countdown for week R-30 and R-29, September 10-21 Message-ID: <20180907201410.GA20482@sm-workstation> Here we go again! The Stein cycle will be slightly longer than past cycles. In case you haven't seen it yet, please take a look over the schedule for this release: https://releases.openstack.org/stein/schedule.html Development Focus ----------------- Focus should be on optimizing the time at the PTG and following up after the event to jump start Stein development. General Information ------------------- All teams should review their release liaison information and make sure it is up to date [1]. [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons While reviewing liaisons, this would also be a good time to make sure your declared release model matches the project's plans for Stein (e.g. [2]). This should be done prior to the first milestone and can be done by proposing a change to the Stein deliverable file for the project(s) affected [3]. [2] https://github.com/openstack/releases/blob/e0a63f7e896abdf4d66fb3ebeaacf4e17f688c38/deliverables/queens/glance.yaml#L5 [3] http://git.openstack.org/cgit/openstack/releases/tree/deliverables/stein Now would be a good time to start brainstorming Forum topics while some of the PTG discussions are fresh. Just a couple months until the Summit and Forum in Berlin. Upcoming Deadlines & Dates -------------------------- Stein-1 milestone: October 25 (R-24 week) Forum at OpenStack Summit in Berlin: November 13-15 -- Sean McGinnis (smcginnis) From jimmy at openstack.org Fri Sep 7 20:32:48 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 07 Sep 2018 15:32:48 -0500 Subject: [openstack-dev] [ptls] [user survey] User Survey Privacy Message-ID: <5B92E070.1000201@openstack.org> Hi PTLs! 
A recent question came up regarding public sharing of the Project-Specific feedback questions on the OpenStack User Survey. The short answer is... this is a great idea! This information is meant to help projects improve and the information is not meant to be kept secret. Oddly enough, nobody asked before lbragstad, so thanks for asking! The long answer... I would like to add a little bit of background on the user survey and how we treat the data. Part of the agreement we make with users that fill out the User Survey is we will keep their data anonymized. As a result, when we publish data on the website[1] we ensure the user can see data from no fewer than 10 companies at a time. Additionally, the User Committee, who helps with the data analysis, sign an NDA before reviewing any data, which helps to preserve user privacy. All that said, the questions for PTLs are framed as "Project Feedback", so the expectation and hope is that PTLs will not only use it to improve their projects, but will also share it amongst other relevant projects. As excited as we are to have you share this data with the community, we do want to make sure there is nothing that would reveal the identity of the survey taker. We've already vetted the English content, but we are still waiting on translations to finish up. So, if you decide to share the data publicly, please only share the English content for the time being. Feel free to reference this email or hit us up on the user-committee at lists.openstack.org Beyond that, we encourage you to follow in Keystone's footsteps and share this feedback with the mailing list, at the PTG, or even with a buddy. We hope it's valuable to your project and the community at large! Net: PTLs, please share the project feedback publicly (e.g. on the mailing lists) now (with the above caveats). Cheers, Jimmy [1] https://www.openstack.org/analytics -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lbragstad at gmail.com Fri Sep 7 20:43:25 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 7 Sep 2018 15:43:25 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 3 September 2018 Message-ID: # Keystone Team Update - Week of 3 September 2018 ## News This week was mainly focused on the python3 community goal and ultimately cleaning up a bunch of issues with stable branches that were uncovered in those reviews. Next week is the PTG, which the group is preparing for in addition to brainstorming Stein forum topics [0][1]. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134362.html [1] https://etherpad.openstack.org/p/BER-keystone-forum-sessions ## User Feedback The foundation provided us with the latest feedback from our users [0]. A sanitized version of that data has been shared publicly [1] for you to checkout prior to the PTG. We have time set aside on Wednesday to review the feedback and discuss any adjustments we want to make to the survey questions. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134434.html [1] https://docs.google.com/spreadsheets/d/1wz-GOoFODGWrFuGqVWDunEWsuhC_lvRJLrfUybTj69Q/edit?usp=sharing ## PTG Planning As I'm sure you're aware, the PTG is next week. The schedule is relatively firm at this point [0], but please raise any conflicts with other sessions if you see any. [0] https://etherpad.openstack.org/p/keystone-stein-ptg ## Open Specs Search query: https://bit.ly/2Pi6dGj A new specification was proposed this week to enable limit support for domains [0]. This is going to be a main focus next week as we discuss unified limits. Please have a look if you're interested in that particular discussion. [0] https://review.openstack.org/#/c/599491/ ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 26 changes this week, most of which were for the python3 community goal [0]. 
We did notice a high number of stable branch failures for keystoneauth, keystonemiddleware, and python-keystoneclient. This was discussed on the ML[1][2]. [0] https://governance.openstack.org/tc/goals/stein/python3-first.html [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134391.html [2] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134454.html ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 58 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. [0] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1776504 ## Bugs This week we opened 9 new bugs, closed 1, and fixed 3. Bugs opened (9) - Bug #1790148 (keystone:Low) opened by FreudianSlip https://bugs.launchpad.net/keystone/+bug/1790148 - Bug #1790428 (keystone:Undecided) opened by Eric Miller https://bugs.launchpad.net/keystone/+bug/1790428 - Bug #1791111 (keystone:Undecided) opened by Paul Peereboom https://bugs.launchpad.net/keystone/+bug/1791111 - Bug #1780164 (keystoneauth:Undecided) opened by mchlumsky https://bugs.launchpad.net/keystoneauth/+bug/1780164 - Bug #1790423 (python-keystoneclient:Undecided) opened by ChenWu https://bugs.launchpad.net/python-keystoneclient/+bug/1790423 - Bug #1790931 (oslo.limit:Medium) opened by Lance Bragstad https://bugs.launchpad.net/oslo.limit/+bug/1790931 - Bug #1790954 (oslo.limit:Medium) opened by Lance Bragstad https://bugs.launchpad.net/oslo.limit/+bug/1790954 - Bug #1790894 (oslo.limit:Low) opened by Lance Bragstad https://bugs.launchpad.net/oslo.limit/+bug/1790894 - Bug #1790935 (oslo.limit:Low) opened by Lance Bragstad https://bugs.launchpad.net/oslo.limit/+bug/1790935 Bugs closed (1) - Bug #1790423 (python-keystoneclient:Undecided) https://bugs.launchpad.net/python-keystoneclient/+bug/1790423 Bugs fixed (3) - Bug #1777671 (keystone:Medium) fixed by Vishakha Agarwal 
https://bugs.launchpad.net/keystone/+bug/1777671 - Bug #1790148 (keystone:Low) fixed by Chason Chan https://bugs.launchpad.net/keystone/+bug/1790148 - Bug #1789351 (keystonemiddleware:Undecided) fixed by wangxiyuan https://bugs.launchpad.net/keystonemiddleware/+bug/1789351 ## Milestone Outlook We have a lot of work to do to shape the release between now and milestone 1, which will be October 26th. Focusing on specifications and early feature development is appreciated. https://releases.openstack.org/stein/schedule.html ## Shout-outs Thanks to Ben, Doug, and Tony for helping us make sense of the tox_install.sh and pip stable branch mess! We should be past the last layer of the onion with respect to the python3 stable patches. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Sep 7 21:01:09 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 7 Sep 2018 14:01:09 -0700 Subject: [openstack-dev] [First Contact] PTG Planning & Meeting Cancelled Message-ID: Hello Everyone! I can't believe the PTG is only a few days away! We have our etherpad of discussion topics here[1]. If there is anything else we need to talk about, please add it! We will be meeting Tuesday Morning in Blanca Peak. I don't think we will fill the whole day, so it will be used for StoryBoard meetings in the afternoon. If there are remote people interested in joining, please let me know and I can set something up. Also, if Tuesday morning doesn't work for people perhaps we can schedule a recap for later in the week. 
Also, the Wednesday meeting will be cancelled because many of us will be meeting in person :) -Kendall (diablo_rojo) [1]https://etherpad.openstack.org/p/FC_SIG_ptg_stein -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Sep 7 21:07:42 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 7 Sep 2018 14:07:42 -0700 Subject: [openstack-dev] [Storyboard] PTG Planning & Upcoming Meeting Cancelled Message-ID: Hello! With the PTG in just a few days, I wanted to give some info and updates so that you are prepared. 1. This coming week's regular meeting on Wednesday will be cancelled. 2. I am planning on booking Blanca Peak for the whole afternoon on Tuesday for discussions. Just waiting for this patch to merge[0]. If we need more time we can schedule something later in the week. See you there! 3. Here [1] is the etherpad that we've been collecting discussion topics into. If there is anything you want to add, feel free. -Kendall (diablo_rojo) [0] https://review.openstack.org/#/c/600665/ [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Fri Sep 7 22:30:53 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 7 Sep 2018 16:30:53 -0600 Subject: [openstack-dev] [os-upstream-institute] Team lunch at the PTG next week - ACTION NEEDED Message-ID: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> Hi Training Team, As a couple of us will be at the PTG next week it would be great to get together one of the days maybe for lunch. Wednesday would work the best for Kendall and me, but we can look into other days as well if it would not work for the majority of people around. So my questions would be: * Are you interested in getting together one of the lunch slots during next week? * Would Wednesday work for you or do you have another preference? 
Please drop a response to this thread and we will figure it out by Monday or early next week based on the responses. Thanks, Ildikó (IRC: ildikov) From amy at demarco.com Fri Sep 7 23:18:08 2018 From: amy at demarco.com (Amy Marrich) Date: Fri, 7 Sep 2018 18:18:08 -0500 Subject: [openstack-dev] [os-upstream-institute] Team lunch at the PTG next week - ACTION NEEDED In-Reply-To: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> References: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> Message-ID: I'm game! Amy (spotz) On Fri, Sep 7, 2018 at 5:30 PM, Ildiko Vancsa wrote: > Hi Training Team, > > As a couple of us will be at the PTG next week it would be great to get > together one of the days maybe for lunch. > > Wednesday would work the best for Kendall and me, but we can look into > other days as well if it would not work for the majority of people around. > > So my questions would be: > > * Are you interested in getting together one of the lunch slots during > next week? > > * Would Wednesday work for you or do you have another preference? > > Please drop a response to this thread and we will figure it out by Monday > or early next week based on the responses. > > Thanks, > Ildikó > (IRC: ildikov) > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From mriedemos at gmail.com Fri Sep 7 23:28:06 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 7 Sep 2018 18:28:06 -0500
Subject: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests
In-Reply-To: References: <11be89ad-a59a-1fe6-5c7b-badb4a06e643@fried.cc> <1b586dfd-594f-3f44-b6f3-8b232aa0ab5b@fried.cc> <165b2ae50f1.11ea66ed588256.3160647352194636247@ghanshyammann.com> Message-ID: <12a19284-f459-c6a0-03cc-b300f04db777@gmail.com>

On 9/7/2018 10:25 AM, William M Edmonds wrote:
> The concern that I have with whitelisting in a given CI is that it has
> to be done independently in every compute driver CI. So while I agree
> that it won't be easy to maintain tagging for compute driver on the
> tempest side, at least that's one place / easier than doing it in every
> driver CI. When anyone figures out that a change is needed, all of the
> CIs would benefit together if there is a shared solution.

How about storing the compute-driver-specific whitelist in a common
location? I'm not sure if that would be tempest, nova, or somewhere else.

--

Thanks,

Matt

From manjeet.s.bhatia at intel.com Sat Sep 8 00:32:42 2018
From: manjeet.s.bhatia at intel.com (Bhatia, Manjeet S)
Date: Sat, 8 Sep 2018 00:32:42 +0000
Subject: [openstack-dev] [neutron] [router] status bug
Message-ID: 

Hi neutrinos,

I was looking at Bug [1], and noticed that the router status stays ACTIVE
even after --disable:

vagrant at allinone:/opt/stack/neutron$ openstack router set --disable router1
vagrant at allinone:/opt/stack/neutron$ neutron router-show router1
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | False                                |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| created_at              | 2018-09-08T00:30:46Z                 |
| description             |                                      |
| distributed             | True                                 |
| external_gateway_info   |                                      |
| flavor_id               |                                      |
| ha                      | False                                |
| id                      | 6f88b5f4-dc94-44bd-89cd-9c0f2b374f79 |
| name                    | router1                              |
| project_id              | 05d72b0eff534ccf81e37b5d6e3402f6     |
| revision_number         | 1                                    |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tags                    |                                      |
| tenant_id               | 05d72b0eff534ccf81e37b5d6e3402f6     |
| updated_at              | 2018-09-08T00:31:18Z                 |
+-------------------------+--------------------------------------+

Shouldn't it update the status to DOWN? Before I open a bug ticket, I just
wanted to confirm this.

[1] https://bugs.launchpad.net/neutron/+bug/1789434

Regards!
Manjeet Singh Bhatia

From dangtrinhnt at gmail.com Sat Sep 8 00:54:33 2018
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Sat, 8 Sep 2018 09:54:33 +0900
Subject: [openstack-dev] [release] Release countdown for week R-30 and R-29, September 10-21
In-Reply-To: <20180907201410.GA20482@sm-workstation> References: <20180907201410.GA20482@sm-workstation> Message-ID: 

Hi,

Thanks for the summary. I just added Searchlight to the Stein deliverables
[1]. One concern: we moved our projects to Storyboard last week; do I have
to change the project file to reflect that, and how?

Thanks,

[1] https://review.openstack.org/#/c/600889/

*Trinh Nguyen *| Founder & Chief Architect
*E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz *

On Sat, Sep 8, 2018 at 5:14 AM Sean McGinnis wrote:
> Here we go again! The Stein cycle will be slightly longer than past
> cycles. In case you haven't seen it yet, please take a look over the
> schedule for this release:
>
> https://releases.openstack.org/stein/schedule.html
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From dangtrinhnt at gmail.com Sat Sep 8 01:10:37 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sat, 8 Sep 2018 10:10:37 +0900 Subject: [openstack-dev] [Storyboard][Searchlight] Where can I find the project ID on Storyboard? Message-ID: Hi Storyboard team, I'm adding Searchlight projects to the Stein deliverables with the storyboard attribute. Where can I find the Searchlight projects ID? Right now I can only see the project links. Thanks, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Sep 8 02:02:03 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 8 Sep 2018 02:02:03 +0000 Subject: [openstack-dev] [Storyboard][Searchlight] Where can I find the project ID on Storyboard? In-Reply-To: References: Message-ID: <20180908020203.o6zbf7g5uqocul4l@yuggoth.org> On 2018-09-08 10:10:37 +0900 (+0900), Trinh Nguyen wrote: > I'm adding Searchlight projects to the Stein deliverables with the > storyboard attribute. Where can I find the Searchlight projects ID? Right > now I can only see the project links. It can be looked up from the API like so: https://storyboard.openstack.org/api/v1/projects/openstack/searchlight However I agree expecting users to do this isn't particularly friendly. Since this was in service of filling out release management details, I've pushed https://review.openstack.org/600893 for review to support optionally using the project name now that SB supports querying with it and uses it by default in webclient URLs. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sam47priya at gmail.com Sat Sep 8 03:04:04 2018 From: sam47priya at gmail.com (Sam P) Date: Sat, 8 Sep 2018 12:04:04 +0900 Subject: [openstack-dev] [masakari] No meeting on next week (9/10) Message-ID: Hi All, Since most of us are in Denver for the PTG, there will be no meeting on 9/10. --- Regards, Sampath -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Sat Sep 8 05:21:38 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 8 Sep 2018 15:21:38 +1000 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][Telemetry] all requirement updates are currently blocked In-Reply-To: References: <20180907041847.GF31148@thor.bakeyournoodle.com> Message-ID: <20180908052138.GB16495@thor.bakeyournoodle.com> On Fri, Sep 07, 2018 at 11:09:15AM +0200, Julien Danjou wrote: > You can, I've already said +1 on a review a few weeks ago. :) Oh great. I'll dig that up and push forward with that side of things if you don't mind. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From kevinzs2048 at gmail.com Sat Sep 8 06:53:06 2018 From: kevinzs2048 at gmail.com (Shuai Zhao) Date: Sat, 8 Sep 2018 14:53:06 +0800 Subject: [openstack-dev] [kuryr] Some questions about kuryr In-Reply-To: References: Message-ID: Hi Kuryr, We still have this issue; any update on the question would be really appreciated. Thanks~ On Mon, Sep 3, 2018 at 1:56 PM Shuai Zhao wrote: > Hi Daniel, > > As we know, there are two ways to deploy networking for Pod-in-VM in > OpenStack through kuryr: macvlan and trunk. > > Why don't we just create a port in Neutron and attach it to the VM, so we > can easily use eth* in the VM to deploy networking for Pod-in-VM?
> > And if we use macvlan mode when VMs are running on overlay network, how > could we resolve the l2-population? > > Best wishes to you ! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Sat Sep 8 14:23:48 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sat, 8 Sep 2018 09:23:48 -0500 Subject: [openstack-dev] [release] Release countdown for week R-30 and R-29, September 10-21 In-Reply-To: References: <20180907201410.GA20482@sm-workstation> Message-ID: <20180908142348.GA22219@sm-workstation> On Sat, Sep 08, 2018 at 09:54:33AM +0900, Trinh Nguyen wrote: > Hi, > > Thanks for the summary. > > I just added Searchlight to the Stein deliverable [1]. One concern is we > moved our projects to Storyboard last week, do I have to change the project > file to reflect that and how? > > Thanks, > > [1] https://review.openstack.org/#/c/600889/ > Hey Trinh, The deliverable should be switched over to reflect the change to use storyboard. Here is an example of a patch that did that for another deliverable: https://review.openstack.org/#/c/553900/1/deliverables/_independent/reno.yaml So once you get the storyboard ID, you just swap out the "launchpad:" line for a "storyboard:" one. Jeremy's idea of supporting the names from the other thread seems like a good idea to me, so I will have to take a look at that proposal. Sean From dangtrinhnt at gmail.com Sat Sep 8 15:33:36 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sun, 9 Sep 2018 00:33:36 +0900 Subject: [openstack-dev] [release] Release countdown for week R-30 and R-29, September 10-21 In-Reply-To: <20180908142348.GA22219@sm-workstation> References: <20180907201410.GA20482@sm-workstation> <20180908142348.GA22219@sm-workstation> Message-ID: Thank Sean. 
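The two steps discussed in this thread, fetching the numeric project ID from the StoryBoard API endpoint Jeremy quoted and then swapping the `launchpad:` key for a `storyboard:` key in the deliverable file, can be sketched roughly as follows. Only the API endpoint comes from the thread; the helper names and the sample ID below are invented for illustration:

```python
import json
import re
from urllib.request import urlopen

# Per-project endpoint Jeremy pointed at earlier in the thread.
STORYBOARD_API = "https://storyboard.openstack.org/api/v1/projects/{name}"


def storyboard_project_id(name):
    # Fetch the numeric StoryBoard ID for e.g. "openstack/searchlight".
    with urlopen(STORYBOARD_API.format(name=name)) as resp:
        return json.load(resp)["id"]


def swap_tracker_line(deliverable_yaml, storyboard_id):
    # Replace the first "launchpad:" line of a deliverable file with a
    # "storyboard:" line carrying the numeric ID, preserving indentation.
    return re.sub(r"^(\s*)launchpad:.*$",
                  r"\g<1>storyboard: %s" % storyboard_id,
                  deliverable_yaml, count=1, flags=re.MULTILINE)
```

Note that the project_group link posted earlier for Searchlight identifies a project *group*; the per-project endpoint above is what returns an individual project's ID.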
*Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Sat, Sep 8, 2018 at 11:24 PM Sean McGinnis wrote: > On Sat, Sep 08, 2018 at 09:54:33AM +0900, Trinh Nguyen wrote: > > Hi, > > > > Thanks for the summary. > > > > I just added Searchlight to the Stein deliverable [1]. One concern is we > > moved our projects to Storyboard last week, do I have to change the > project > > file to reflect that and how? > > > > Thanks, > > > > [1] https://review.openstack.org/#/c/600889/ > > > > Hey Trinh, > > The deliverable should be switched over to reflect the change to use > storyboard. Here is an example of a patch that did that for another > deliverable: > > > https://review.openstack.org/#/c/553900/1/deliverables/_independent/reno.yaml > > So once you get the storyboard ID, you just swap out the "launchpad:" line > for > a "storyboard:" one. > > Jeremy's idea of supporting the names from the other thread seems like a > good > idea to me, so I will have to take a look at that proposal. > > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Sat Sep 8 15:33:46 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Sat, 8 Sep 2018 10:33:46 -0500 Subject: [openstack-dev] [os-upstream-institute] Team lunch at the PTG next week - ACTION NEEDED In-Reply-To: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> References: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> Message-ID: Ildiko, Sounds like a good plan.  I don't think I have other plans for Wednesday so that should work. Look forward to seeing you next week! 
Jay On 9/7/2018 5:30 PM, Ildiko Vancsa wrote: > Hi Training Team, > > As a couple of us will be at the PTG next week it would be great to get together one of the days maybe for lunch. > > Wednesday would work the best for Kendall and me, but we can look into other days as well if it would not work for the majority of people around. > > So my questions would be: > > * Are you interested in getting together one of the lunch slots during next week? > > * Would Wednesday work for you or do you have another preference? > > Please drop a response to this thread and we will figure it out by Monday or early next week based on the responses. > > Thanks, > Ildikó > (IRC: ildikov) > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jungleboyj at gmail.com Sat Sep 8 15:58:22 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Sat, 8 Sep 2018 10:58:22 -0500 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: <808519e4-8b10-26a1-e069-f67a1aafd2f1@nemebean.com> References: <808519e4-8b10-26a1-e069-f67a1aafd2f1@nemebean.com> Message-ID: Ben, Ping me when you are planning on having this discussion if you think of it.  Since there is interest in this for Cinder I would like to try to be there. Thanks! Jay On 9/7/2018 1:43 PM, Ben Nemec wrote: > I will also note that I had an oslo.limit topic on the Oslo PTG > schedule: https://etherpad.openstack.org/p/oslo-stein-ptg-planning > > I don't know whether anybody from Jaze's team will be there, but if so > that would be a good opportunity for some face-to-face discussion. I > didn't give it a whole lot of time, but I'm open to extending it if > that would be helpful. > > On 09/07/2018 01:34 PM, Lance Bragstad wrote: >> That would be great! 
I can break down the work a little bit to help >> describe where we are at with different parts of the initiative. >> Hopefully it will be useful for your colleagues in case they haven't >> been closely following the effort. >> >> # keystone >> >> Based on the initial note in this thread, I'm sure you're aware of >> keystone's status with respect to unified limits. But to recap, the >> initial implementation landed in Queens and targeted flat enforcement >> [0]. During the Rocky PTG we sat down with other services and a few >> operators to explain the current status in keystone and if either >> developers or operators had feedback on the API specifically. Notes >> were captured in etherpad [1]. We spent the Rocky cycle fixing >> usability issues with the API [2] and implementing support for a >> hierarchical enforcement model [3]. >> >> At this point keystone is ready for services to start consuming the >> unified limits work. The unified limits API is still marked as experimental >> and it will likely stay that way until we have at least one project >> using unified limits. We can use that as an opportunity to do a final >> flush of any changes that need to be made to the API before fully >> supporting it. The keystone team expects that to be a quick >> transition, as we don't want to keep the API hanging in an >> experimental state. It's really just a safeguard to make sure we >> have the opportunity to use it in another service before fully >> committing to the API. Ultimately, we don't want to prematurely mark >> the API as supported when other services aren't even using it yet, >> and then realize it has issues that could have been fixed prior to >> the adoption phase.
Currently, the sole purpose of >> oslo.limit is to abstract project and project hierarchy information >> away from the service, so that services don't have to reimplement >> client code to understand project trees, which could arguably become >> complex and lead to inconsistencies in u-x across services. >> >> Ideally, a service should be able to pass some relatively basic >> information to oslo.limit and expect an answer on whether or not >> usage for that claim is valid. For example, here is a project ID, >> resource name, and resource quantity, tell me if this project is over >> its associated limit or default limit. >> >> We're currently working on implementing the enforcement bits of >> oslo.limit, which requires making API calls to keystone in order to >> retrieve the deployed enforcement model, limit information, and >> project hierarchies. Then it needs to reason about those things and >> calculate usage from the service in order to determine if the request >> claim is valid or not. There are patches up for this work, and >> reviews are always welcome [4]. >> >> Note that we haven't released oslo.limit yet, but once the basic >> enforcement described above is implemented we will. Then services can >> officially pull it into their code as a dependency and we can work >> out remaining bugs in both keystone and oslo.limit. Once we're >> confident in both the API and the library, we'll bump oslo.limit to >> version 1.0 at the same time we graduate the unified limits API from >> "experimental" to "supported". Note that oslo libraries <1.0 are >> considered experimental, which fits nicely with the unified limit API >> being experimental as we shake out usability issues in both pieces of >> software. >> >> # services >> >> Finally, we'll be in a position to start integrating oslo.limit into >> services. I imagine this to be a coordinated effort between keystone, >> oslo, and service developers.
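The enforcement contract described above (given a project ID, resource name, and requested quantity, decide whether the claim fits under the project's registered limit or the default limit) can be illustrated with a toy flat-model check. This is deliberately not the oslo.limit API; every name below is invented purely to show the shape of the check:

```python
class ClaimExceedsLimit(Exception):
    """Raised when a requested claim would push usage over the limit."""


def check_claim(project_id, resource, delta,
                registered_limits, default_limits, get_usage):
    # A registered per-project limit wins; otherwise fall back to the
    # resource's default limit.
    limit = registered_limits.get((project_id, resource),
                                  default_limits[resource])
    # The service supplies its current usage via a callback.
    usage = get_usage(project_id, resource)
    if usage + delta > limit:
        raise ClaimExceedsLimit(
            "%s/%s: usage %d + claim %d exceeds limit %d"
            % (project_id, resource, usage, delta, limit))


# Example: project "p1" already uses 9 cores against a default of 10.
usage_db = {("p1", "cores"): 9}
lookup = lambda p, r: usage_db.get((p, r), 0)
check_claim("p1", "cores", 1, {}, {"cores": 10}, lookup)  # fits exactly
```

With the default of 10 cores, a further claim of 2 would raise ClaimExceedsLimit, while a registered per-project limit would override the default.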
I do have a patch up that adds a >> conceptual overview for developers consuming oslo.limit [5], which >> renders into [6]. >> >> To be honest, this is going to be a very large piece of work and it's >> going to require a lot of communication. In my opinion, I think we >> can use the first couple iterations to generate some well-written >> usage documentation. Any questions coming from developers in this >> phase should probably be answered in documentation if we want to >> enable folks to pick this up and run with it. Otherwise, I could see >> the handful of people pushing the effort becoming a bottleneck in >> adoption. >> >> Hopefully this helps paint the landscape of where things are >> currently with respect to each piece. As always, let me know if you >> have any additional questions. If people want to discuss online, you >> can find me, and other contributors familiar with this topic, in >> #openstack-keystone or #openstack-dev on IRC (nic: lbragstad). >> >> [0] >> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html >> [1] https://etherpad.openstack.org/p/unified-limits-rocky-ptg >> [2] https://tinyurl.com/y6ucarwm >> [3] >>
I wish in S, >>     all service can use unified limits to do quota job. >> >>      > >>      > On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee >     > wrote: >>      >> >>      >> On Stein only one service? >>      >> Is there some methods to move this more fast? >>      >> Lance Bragstad >     > 于2018年9月5日周三 下午9:29写道: >>      >> > >>      >> > Not yet. Keystone worked through a bunch of usability >>     improvements with the unified limits API last release and created >>     the oslo.limit library. We have a patch or two left to land in >>     oslo.limit before projects can really start using unified limits >> [0]. >>      >> > >>      >> > We're hoping to get this working with at least one resource in >>     another service (nova, cinder, etc...) in Stein. >>      >> > >>      >> > [0] >> https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init >>      >> > >>      >> > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee >     > wrote: >>      >> >> >>      >> >> Hello, >>      >> >>     Does nova and cinder  use keystone's unified limits api >>     to do quota job? >>      >> >>     If not, is there a plan to do this? >>      >> >>     Thanks a lot. 
>>      >> >> >>      >> >> -- >>      >> >> 谦谦君子 >>      >> >> >>      >> >> >> __________________________________________________________________________ >>      >> >> OpenStack Development Mailing List (not for usage questions) >>      >> >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>      >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>      >> > >>      >> > >> __________________________________________________________________________ >>      >> > OpenStack Development Mailing List (not for usage questions) >>      >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>      >> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>      >> >>      >> >>      >> >>      >> -- >>      >> 谦谦君子 >>      >> >>      >> >> __________________________________________________________________________ >>      >> OpenStack Development Mailing List (not for usage questions) >>      >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>      >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>      > >>      > >> __________________________________________________________________________ >>      > OpenStack Development Mailing List (not for usage questions) >>      > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>      > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >>     --     谦谦君子 >> >> __________________________________________________________________________ >>     OpenStack Development Mailing List (not for usage questions) >>     Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage 
questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From miguel at mlavalle.com Sat Sep 8 20:29:52 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sat, 8 Sep 2018 15:29:52 -0500 Subject: [openstack-dev] [os-upstream-institute] Team lunch at the PTG next week - ACTION NEEDED In-Reply-To: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> References: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> Message-ID: Hi Ildikó, Wednesday lunch is fine with me Regards On Fri, Sep 7, 2018 at 5:30 PM, Ildiko Vancsa wrote: > Hi Training Team, > > As a couple of us will be at the PTG next week it would be great to get > together one of the days maybe for lunch. > > Wednesday would work the best for Kendall and me, but we can look into > other days as well if it would not work for the majority of people around. > > So my questions would be: > > * Are you interested in getting together one of the lunch slots during > next week? > > * Would Wednesday work for you or do you have another preference? > > Please drop a response to this thread and we will figure it out by Monday > or early next week based on the responses. > > Thanks, > Ildikó > (IRC: ildikov) > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Sun Sep 9 03:16:27 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sat, 08 Sep 2018 21:16:27 -0600 Subject: [openstack-dev] [goals][python3][qa] Re: starting zuul configuration migration for QA team In-Reply-To: <1658fa0a1fc.af8cf24346696.2925094243926630459@ghanshyammann.com> References: <725D0C93-85E3-4009-9735-8A17BD3D97EF@doughellmann.com> <07E1EA69-635D-4618-BDDC-5869DD3B6FDD@doughellmann.com> <1658fa0a1fc.af8cf24346696.2925094243926630459@ghanshyammann.com> Message-ID: <1536462871-sup-7772@lrrr.local> Excerpts from Ghanshyam Mann's message of 2018-08-31 19:56:33 +0900: > > > > Thanks Doug. QA is ready for this work. Let me know the link and I'll review those. Here's the list of patches for the QA team.
+--------------------------------------------------------+-------------------------------------+---------+----------+-------------------------------------+---------------+---------------+
| Subject                                                | Repo                                | Tests   | Workflow | URL                                 | Branch        | Owner         |
+--------------------------------------------------------+-------------------------------------+---------+----------+-------------------------------------+---------------+---------------+
| import zuul job settings from project-config           | openstack-dev/bashate               | UNKNOWN | NEW      | https://review.openstack.org/600987 | master        | Doug Hellmann |
| switch documentation job to new PTI                    | openstack-dev/bashate               | UNKNOWN | NEW      | https://review.openstack.org/600988 | master        | Doug Hellmann |
| add python 3.6 unit test job                           | openstack-dev/bashate               | UNKNOWN | NEW      | https://review.openstack.org/600989 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/devstack              | UNKNOWN | NEW      | https://review.openstack.org/600990 | master        | Doug Hellmann |
| switch documentation job to new PTI                    | openstack-dev/devstack              | UNKNOWN | NEW      | https://review.openstack.org/600991 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/devstack              | UNKNOWN | NEW      | https://review.openstack.org/601022 | stable/ocata  | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/devstack              | UNKNOWN | NEW      | https://review.openstack.org/601024 | stable/pike   | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/devstack              | UNKNOWN | NEW      | https://review.openstack.org/601026 | stable/queens | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/devstack              | UNKNOWN | NEW      | https://review.openstack.org/601029 | stable/rocky  | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/grenade               | UNKNOWN | NEW      | https://review.openstack.org/600992 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/grenade               | PASS    | NEW      | https://review.openstack.org/601023 | stable/ocata  | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/grenade               | UNKNOWN | NEW      | https://review.openstack.org/601025 | stable/pike   | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/grenade               | UNKNOWN | NEW      | https://review.openstack.org/601027 | stable/queens | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/grenade               | UNKNOWN | NEW      | https://review.openstack.org/601030 | stable/rocky  | Doug Hellmann |
| import zuul job settings from project-config           | openstack-dev/hacking               | UNKNOWN | NEW      | https://review.openstack.org/600993 | master        | Doug Hellmann |
| switch documentation job to new PTI                    | openstack-dev/hacking               | UNKNOWN | NEW      | https://review.openstack.org/600994 | master        | Doug Hellmann |
| add python 3.6 unit test job                           | openstack-dev/hacking               | UNKNOWN | NEW      | https://review.openstack.org/600995 | master        | Doug Hellmann |
| remove job settings for Quality Assurance repositories | openstack-infra/project-config      | UNKNOWN | WIP      | https://review.openstack.org/601032 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/coverage2sql              | UNKNOWN | NEW      | https://review.openstack.org/600984 | master        | Doug Hellmann |
| switch documentation job to new PTI                    | openstack/coverage2sql              | UNKNOWN | NEW      | https://review.openstack.org/600985 | master        | Doug Hellmann |
| add python 3.6 unit test job                           | openstack/coverage2sql              | UNKNOWN | NEW      | https://review.openstack.org/600986 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/devstack-plugin-ceph      | UNKNOWN | NEW      | https://review.openstack.org/600996 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/devstack-plugin-container | UNKNOWN | REVIEWED | https://review.openstack.org/600997 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/devstack-plugin-container | UNKNOWN | REVIEWED | https://review.openstack.org/601028 | stable/queens | Doug Hellmann |
| import zuul job settings from project-config           | openstack/devstack-plugin-container | UNKNOWN | REVIEWED | https://review.openstack.org/601031 | stable/rocky  | Doug Hellmann |
| import zuul job settings from project-config           | openstack/devstack-tools            | UNKNOWN | NEW      | https://review.openstack.org/600998 | master        | Doug Hellmann |
| add python 3.6 unit test job                           | openstack/devstack-tools            | UNKNOWN | NEW      | https://review.openstack.org/600999 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/eslint-config-openstack   | UNKNOWN | NEW      | https://review.openstack.org/601000 | master        | Doug Hellmann |
| switch documentation job to new PTI                    | openstack/eslint-config-openstack   | UNKNOWN | NEW      | https://review.openstack.org/601001 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/karma-subunit-reporter    | UNKNOWN | NEW      | https://review.openstack.org/601002 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/openstack-health          | UNKNOWN | NEW      | https://review.openstack.org/601003 | master        | Doug Hellmann |
| add python 3.6 unit test job                           | openstack/openstack-health          | UNKNOWN | NEW      | https://review.openstack.org/601004 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/os-performance-tools      | UNKNOWN | NEW      | https://review.openstack.org/601005 | master        | Doug Hellmann |
| switch documentation job to new PTI                    | openstack/os-performance-tools      | UNKNOWN | NEW      | https://review.openstack.org/601006 | master        | Doug Hellmann |
| add python 3.6 unit test job                           | openstack/os-performance-tools      | UNKNOWN | NEW      | https://review.openstack.org/601007 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/os-testr                  | UNKNOWN | NEW      | https://review.openstack.org/601008 | master        | Doug Hellmann |
| switch documentation job to new PTI                    | openstack/os-testr                  | UNKNOWN | NEW      | https://review.openstack.org/601009 | master        | Doug Hellmann |
| add python 3.6 unit test job                           | openstack/os-testr                  | UNKNOWN | NEW      | https://review.openstack.org/601010 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/patrole                   | UNKNOWN | NEW      | https://review.openstack.org/601011 | master        | Doug Hellmann |
| switch documentation job to new PTI                    | openstack/patrole                   | UNKNOWN | NEW      | https://review.openstack.org/601012 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/qa-specs                  | UNKNOWN | NEW      | https://review.openstack.org/601013 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/stackviz                  | UNKNOWN | NEW      | https://review.openstack.org/601014 | master        | Doug Hellmann |
| switch documentation job to new PTI                    | openstack/stackviz                  | UNKNOWN | NEW      | https://review.openstack.org/601015 | master        | Doug Hellmann |
| add python 3.6 unit test job                           | openstack/stackviz                  | UNKNOWN | NEW      | https://review.openstack.org/601016 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/tempest                   | UNKNOWN | NEW      | https://review.openstack.org/601017 | master        | Doug Hellmann |
| switch documentation job to new PTI                    | openstack/tempest                   | UNKNOWN | NEW      | https://review.openstack.org/601018 | master        | Doug Hellmann |
| import zuul job settings from project-config           | openstack/tempest-stress            | UNKNOWN | NEW      | https://review.openstack.org/601019 | master        | Doug Hellmann |
| add python 3.5 unit test job                           | openstack/tempest-stress            | UNKNOWN | NEW      | https://review.openstack.org/601020 | master        | Doug Hellmann |
| add python 3.6 unit test job                           | openstack/tempest-stress            | UNKNOWN | NEW      | https://review.openstack.org/601021 | master        | Doug Hellmann |
+--------------------------------------------------------+-------------------------------------+---------+----------+-------------------------------------+---------------+---------------+
From gergely.csatari at nokia.com Sun Sep 9 05:50:47 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Sun, 9 Sep 2018 05:50:47 +0000 Subject: [openstack-dev] [os-upstream-institute] Team lunch at the PTG next week - ACTION NEEDED In-Reply-To: References: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> Message-ID: Hi, I'm in. Br, Gerg0 From: Miguel Lavalle Sent: Saturday, September 8, 2018 10:30 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [os-upstream-institute] Team lunch at the PTG next week - ACTION NEEDED Hi Ildikó, Wednesday lunch is fine with me Regards On Fri, Sep 7, 2018 at 5:30 PM, Ildiko Vancsa > wrote: Hi Training Team, As a couple of us will be at the PTG next week it would be great to get together one of the days maybe for lunch. Wednesday would work the best for Kendall and me, but we can look into other days as well if it would not work for the majority of people around. So my questions would be: * Are you interested in getting together one of the lunch slots during next week? * Would Wednesday work for you or do you have another preference?
Please drop a response to this thread and we will figure it out by Monday or early next week based on the responses. Thanks, Ildikó (IRC: ildikov) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Sun Sep 9 10:45:34 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 9 Sep 2018 06:45:34 -0400 Subject: [openstack-dev] [openstack-ansible] no meeting this week Message-ID: Hi team: We won't be conducting a meeting this week because of the PTG, however, we'll be making an effort to try and allow/bring remote access to those not at the PTG over this week. Thanks everyone. Regards, Mohammed From mnaser at vexxhost.com Sun Sep 9 15:39:40 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 9 Sep 2018 11:39:40 -0400 Subject: [openstack-dev] [openstack-ansible] ptg schedule Message-ID: Hi team: I've come up with a schedule which includes the general events happening at the PTG which would be interesting for the OSA team and contributors. https://calendar.google.com/calendar?cid=dmV4eGhvc3QuY29tXzgwNmJyb2hpZnNoaGdmY2kzcWdqdDk3aTJzQGdyb3VwLmNhbGVuZGFyLmdvb2dsZS5jb20 Please let me know if you have any difficulty accessing that link. Also, it seems like a very loaded schedule however we're likely going to have extra time here and there so it's very tentative :) Thanks and look forward to seeing everyone! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From doug at doughellmann.com Sun Sep 9 17:21:08 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 09 Sep 2018 11:21:08 -0600 Subject: [openstack-dev] [goals][python3][cinder] starting zuul migration Message-ID: <1536513571-sup-9605@lrrr.local> I apparently missed the email Sean sent a while back saying it was OK to start the migration, but we talked today and so I've submitted the patches to move cinder's zuul settings. Here's the list. Doug
+----------------------------------------------+-----------------------------------------+---------------+
| Subject                                      | Repo                                    | Branch        |
+----------------------------------------------+-----------------------------------------+---------------+
| remove job settings for cinder repositories  | openstack-infra/project-config          | master        |
| import zuul job settings from project-config | openstack/cinder                        | master        |
| switch documentation job to new PTI          | openstack/cinder                        | master        |
| add python 3.6 unit test job                 | openstack/cinder                        | master        |
| import zuul job settings from project-config | openstack/cinder                        | stable/ocata  |
| import zuul job settings from project-config | openstack/cinder                        | stable/pike   |
| import zuul job settings from project-config | openstack/cinder                        | stable/queens |
| import zuul job settings from project-config | openstack/cinder                        | stable/rocky  |
| import zuul job settings from project-config | openstack/cinder-specs                  | master        |
| import zuul job settings from project-config | openstack/cinder-tempest-plugin         | master        |
| import zuul job settings from project-config | openstack/os-brick                      | master        |
| switch documentation job to new PTI          | openstack/os-brick                      | master        |
| add python 3.6 unit test job                 | openstack/os-brick                      | master        |
| add lib-forward-testing-python3 test job     | openstack/os-brick                      | master        |
| import zuul job settings from project-config | openstack/os-brick                      | stable/ocata  |
| import zuul job settings from project-config | openstack/os-brick                      | stable/pike   |
| import zuul job settings from project-config | openstack/os-brick                      | stable/queens |
| import zuul job settings from project-config | openstack/os-brick                      | stable/rocky  |
| import zuul job settings from project-config | openstack/python-brick-cinderclient-ext | master        |
| switch documentation job to new PTI          | openstack/python-brick-cinderclient-ext | master        |
| add python 3.6 unit test job                 | openstack/python-brick-cinderclient-ext | master        |
| import zuul job settings from project-config | openstack/python-brick-cinderclient-ext | stable/ocata  |
| import zuul job settings from project-config | openstack/python-brick-cinderclient-ext | stable/pike   |
| import zuul job settings from project-config | openstack/python-brick-cinderclient-ext | stable/queens |
| import zuul job settings from project-config | openstack/python-brick-cinderclient-ext | stable/rocky  |
| import zuul job settings from project-config | openstack/python-cinderclient           | master        |
| switch documentation job to new PTI          | openstack/python-cinderclient           | master        |
| add python 3.6 unit test job                 | openstack/python-cinderclient           | master        |
| add lib-forward-testing-python3 test job     | openstack/python-cinderclient           | master        |
| import zuul job settings from project-config | openstack/python-cinderclient           | stable/ocata  |
| import zuul job settings from project-config | openstack/python-cinderclient           | stable/pike   |
| import zuul job settings from project-config | openstack/python-cinderclient           | stable/queens |
| import zuul job settings from project-config | openstack/python-cinderclient           | stable/rocky  |
+----------------------------------------------+-----------------------------------------+---------------+
From tpb at dyncloud.net Sun Sep 9 18:10:03 2018 From: tpb at dyncloud.net (Tom Barron) Date: Sun, 9 Sep 2018 14:10:03 -0400 Subject: [openstack-dev] [manila] initial schedule for PTG Message-ID: <20180909181003.qvx3hrteirehs7fl@barron.net> Manila meets Monday and Tuesday this week in Steamboat [1] from 9am to 5pm UTC-0600.
The team came up with a rich set of discussion topics and I've arranged the manila PTG etherpad [2] so that they are all included in a schedule for the two days. We'll need to make adjustments if some topics go fast and others need more time, but please take a preliminary look now and let me know if you see anything that you think will need more time or that is scheduled for a bad time. I tried to start with stuff like backlog, cross-project goals, etc. and Stein deadlines so that we have a framework for decisions about what work we can fit into the Stein cycle. Also, we have remote attendees participating, I think all to the east of Denver, so I tried to shift topics known to be of interest to those folks earlier in the day. -- Tom Barron (tbarron) [1] https://www.openstack.org/assets/ptg/Denver-map.pdf [2] https://etherpad.openstack.org/p/manila-ptg-planning-denver-2018 From gmann at ghanshyammann.com Sun Sep 9 19:11:17 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 10 Sep 2018 04:11:17 +0900 Subject: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests In-Reply-To: <12a19284-f459-c6a0-03cc-b300f04db777@gmail.com> References: <11be89ad-a59a-1fe6-5c7b-badb4a06e643@fried.cc> <1b586dfd-594f-3f44-b6f3-8b232aa0ab5b@fried.cc> <165b2ae50f1.11ea66ed588256.3160647352194636247@ghanshyammann.com> <12a19284-f459-c6a0-03cc-b300f04db777@gmail.com> Message-ID: <165bfbece69.b84e6226124507.1660751732214305006@ghanshyammann.com> ---- On Sat, 08 Sep 2018 08:28:06 +0900 Matt Riedemann wrote ---- > On 9/7/2018 10:25 AM, William M Edmonds wrote: > > The concern that I have with whitelisting in a given CI is that it has > > to be done independently in every compute driver CI. So while I agree > > that it won't be easy to maintain tagging for compute driver on the > > tempest side, at least that's one place / easier than doing it in every > > driver CI. 
When anyone figures out that a change is needed, all of the > > CIs would benefit together if there is a shared solution. > > How about storing the compute-driver specific whitelist in a common > location? I'm not sure if that would be tempest, nova or somewhere else. Yeah, Tempest would not be the best location for such tagging or whitelisting. I think nova may be a better choice if nothing else. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Sun Sep 9 21:20:13 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 10 Sep 2018 06:20:13 +0900 Subject: [openstack-dev] [os-upstream-institute] Team lunch at the PTG next week - ACTION NEEDED In-Reply-To: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> References: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> Message-ID: <165c034d94f.df9c64a0125121.1658777268767680518@ghanshyammann.com> I am in for Wed lunch meeting. -gmann ---- On Sat, 08 Sep 2018 07:30:53 +0900 Ildiko Vancsa wrote ---- > Hi Training Team, > > As a couple of us will be at the PTG next week it would be great to get together one of the days maybe for lunch. > > Wednesday would work the best for Kendall and me, but we can look into other days as well if it would not work for the majority of people around. > > So my questions would be: > > * Are you interested in getting together one of the lunch slots during next week? > > * Would Wednesday work for you or do you have another preference?
> > Thanks, > Ildikó > (IRC: ildikov) > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tony at bakeyournoodle.com Sun Sep 9 23:39:26 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 10 Sep 2018 09:39:26 +1000 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][Telemetry] all requirement updates are currently blocked In-Reply-To: <20180908052138.GB16495@thor.bakeyournoodle.com> References: <20180907041847.GF31148@thor.bakeyournoodle.com> <20180908052138.GB16495@thor.bakeyournoodle.com> Message-ID: <20180909233925.GD16495@thor.bakeyournoodle.com> On Sat, Sep 08, 2018 at 03:21:38PM +1000, Tony Breeds wrote: > On Fri, Sep 07, 2018 at 11:09:15AM +0200, Julien Danjou wrote: > > > You can, I've already said +1 on a review a few weeks ago. :) > > Oh great. I'll dig that up and push forward with that side of things if > you don't mind. It looks like in August this was already setup https://review.openstack.org/#/c/591682/ So releases going forward will be on pypi. Julien, Do you mind me arranging for at least the following versions to be published to pypi? [tony at thor ceilometer]$ for branch in origin/stable/{ocata,pike,queens,rocky} ; do printf "%-25s: %s\n" $branch "$(git describe --abbrev=0 $branch)" ; done origin/stable/ocata : 8.1.5 origin/stable/pike : 9.0.6 origin/stable/queens : 10.0.1 origin/stable/rocky : 11.0.0 Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From knikolla at bu.edu Sun Sep 9 23:45:28 2018 From: knikolla at bu.edu (Kristi Nikolla) Date: Sun, 9 Sep 2018 17:45:28 -0600 Subject: [openstack-dev] [election] [tc] TC Candidacy In-Reply-To: References: Message-ID: <7FE729B4-A9C8-4128-858C-52AC2F745075@bu.edu> Thanks for the question, Matt, I have to agree that that is not a good use of time, which is why I’m really interested in the Constellations initiative. There doesn’t have to be a very specifically defined “what OpenStack is” answer, but instead we can focus on multiple answers based on the features we offer and the multitude of problems that we solve. Furthermore, I think that constellations will help weaken the silos between teams by increasing cross-project collaboration and integration towards a common goal. Ultimately, to me, what OpenStack is, first and foremost, is a community. Additionally, I would like to be much more involved with the process of welcoming and mentoring new contributors, and the various groups formed around that. Working in academia gives me access to a large pool of students, especially during the cloud computing course that we offer at two universities in the Boston area. Many of our previous alums (me included) end up becoming regular contributors and continuing their work in OpenStack and other open source communities. Ultimately, this list isn’t exclusive and I’d love to hear your and other people's opinions about what you think I should focus on. Kristi Nikolla > On Sep 7, 2018, at 7:26 AM, Matt Riedemann wrote: > > On 9/5/2018 1:20 PM, Kristi Nikolla wrote: >> I’m really excited to have the opportunity to take part in the discussion with >> regards to the technical vision for OpenStack. Regardless of election outcome, >> this is the first step towards a larger involvement from me in the important >> discussions (no more shying away from the important mailing list threads.)
> > I'm not trying to pick on you Kristi, but personally I'm tired of the TC vision question that's been going on for years now and would like the people I vote for to spend less time talking about OpenStack and what it is or what it isn't (because that changes based on the person you talk to and on what day you ask them), and spend more time figuring out how to move cross-project initiatives forward. So whether or not OpenStack is a toolkit for private/public/edge clouds, or a product, or something else, there are likely common themes within OpenStack that we can generally agree on across projects and need people to work on them, rather than just talk about doing them. Are there specific cross-project initiatives you are interested in working on? > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From iwienand at redhat.com Mon Sep 10 01:02:06 2018 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 10 Sep 2018 11:02:06 +1000 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][Telemetry] all requirement updates are currently blocked In-Reply-To: <20180909233925.GD16495@thor.bakeyournoodle.com> References: <20180907041847.GF31148@thor.bakeyournoodle.com> <20180908052138.GB16495@thor.bakeyournoodle.com> <20180909233925.GD16495@thor.bakeyournoodle.com> Message-ID: On 09/10/2018 09:39 AM, Tony Breeds wrote: > Julien, Do you mind me arranging for at least the following versions to > be published to pypi? For this particular case, I think our best approach is to have an admin manually upload the tar & wheels from tarballs.openstack.org to pypi. 
All other options seem to be sub-optimal: - if we re-ran the release pipeline, I *think* it would all be idempotent and the publishing would happen, but there would be confusing duplicate release emails sent. - we could make a special "only-publish" template that avoids notification jobs; switch ceilometer to this, re-run the releases, then switch back. urgh, especially if something goes wrong. - ceilometer could make "no-op" releases on each branch to trigger a fresh release & publish; but releases that essentially do nothing are I imagine an annoyance for users and distributors who track stable branches. It would look like https://test.pypi.org/project/ceilometer/ The pypi hashes will all line up with the .asc files we publish, so we know there's no funny business going on. Thanks, -i From tony at bakeyournoodle.com Mon Sep 10 05:36:00 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 10 Sep 2018 15:36:00 +1000 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][Telemetry] all requirement updates are currently blocked In-Reply-To: References: <20180907041847.GF31148@thor.bakeyournoodle.com> <20180908052138.GB16495@thor.bakeyournoodle.com> <20180909233925.GD16495@thor.bakeyournoodle.com> Message-ID: <20180910053600.GE16495@thor.bakeyournoodle.com> On Mon, Sep 10, 2018 at 11:02:06AM +1000, Ian Wienand wrote: > On 09/10/2018 09:39 AM, Tony Breeds wrote: > > Julien, Do you mind me arranging for at least the following versions to > > be published to pypi? > > For this particular case, I think our best approach is to have an > admin manually upload the tar & wheels from tarballs.openstack.org to > pypi. All other options seem to be sub-optimal: > > - if we re-ran the release pipeline, I *think* it would all be > idempotent and the publishing would happen, but there would be > confusing duplicate release emails sent. fungi also points[1] out that we'd re-sign/publish those artefacts which isn't desirable. 
So I think we're limited to publishing the existing artefacts (preferred) or waiting/making releases from the open branches. Yours Tony. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-09-10.log.html#t2018-09-10T04:16:37 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Mon Sep 10 05:49:49 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 10 Sep 2018 15:49:49 +1000 Subject: [openstack-dev] [Release-job-failures] Tag of openstack/python-neutronclient failed In-Reply-To: References: Message-ID: <20180910054949.GF16495@thor.bakeyournoodle.com> On Mon, Sep 10, 2018 at 05:13:35AM +0000, zuul at openstack.org wrote: > Build failed. > > - publish-openstack-releasenotes http://logs.openstack.org/c8/c89ca61fdcaf603a10750b289228b7f9a3597290/tag/publish-openstack-releasenotes/fbbd0fa/ : FAILURE in 4m 03s I'm not sure what caused this to fail and my attempts to reproduce it haven't been fruitful :( Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From chris at openstack.org Mon Sep 10 05:53:55 2018 From: chris at openstack.org (Chris Hoge) Date: Sun, 9 Sep 2018 23:53:55 -0600 Subject: [openstack-dev] [k8s] SIG-K8s PTG Meetings, Monday September 10, 2018 Message-ID: SIG-K8s has space reserved in Ballroom A for all of Monday, September 10 at the PTG. We will begin at 9:00 with a planning session, similar to that in Dublin, where we will organize topics and times for the remainder of the day. 
The planning etherpad can be found here: https://etherpad.openstack.org/p/sig-k8s-2018-denver-ptg The link to the Dublin planning etherpad: https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg Thanks, Chris From dmellado at redhat.com Mon Sep 10 06:21:20 2018 From: dmellado at redhat.com (Daniel Mellado Area) Date: Mon, 10 Sep 2018 08:21:20 +0200 Subject: [openstack-dev] [Kuryr] Meeting cancelled Message-ID: Due to folks being at the Denver PTG (including myself virtually at odd hours x) we'll be cancelling today's weekly meeting. Best! Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From julien at danjou.info Mon Sep 10 06:40:28 2018 From: julien at danjou.info (Julien Danjou) Date: Mon, 10 Sep 2018 08:40:28 +0200 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][Telemetry] all requirement updates are currently blocked In-Reply-To: <20180909233925.GD16495@thor.bakeyournoodle.com> (Tony Breeds's message of "Mon, 10 Sep 2018 09:39:26 +1000") References: <20180907041847.GF31148@thor.bakeyournoodle.com> <20180908052138.GB16495@thor.bakeyournoodle.com> <20180909233925.GD16495@thor.bakeyournoodle.com> Message-ID: On Mon, Sep 10 2018, Tony Breeds wrote: > It looks like in August this was already setup https://review.openstack.org/#/c/591682/ > So releases going forward will be on pypi. > > Julien, Do you mind me arranging for at least the following versions to > be published to pypi? > > [tony at thor ceilometer]$ for branch in origin/stable/{ocata,pike,queens,rocky} ; do printf "%-25s: %s\n" $branch "$(git describe --abbrev=0 $branch)" ; done > origin/stable/ocata : 8.1.5 > origin/stable/pike : 9.0.6 > origin/stable/queens : 10.0.1 > origin/stable/rocky : 11.0.0 Sure, go ahead! -- Julien Danjou ;; Free Software hacker ;; https://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From tony at bakeyournoodle.com Mon Sep 10 06:44:02 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 10 Sep 2018 16:44:02 +1000 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][Telemetry] all requirement updates are currently blocked In-Reply-To: References: <20180907041847.GF31148@thor.bakeyournoodle.com> <20180908052138.GB16495@thor.bakeyournoodle.com> <20180909233925.GD16495@thor.bakeyournoodle.com> Message-ID: <20180910064402.GG16495@thor.bakeyournoodle.com> On Mon, Sep 10, 2018 at 08:40:28AM +0200, Julien Danjou wrote: > On Mon, Sep 10 2018, Tony Breeds wrote: > > > It looks like in August this was already setup https://review.openstack.org/#/c/591682/ > > So releases going forward will be on pypi. > > > > Julien, Do you mind me arranging for at least the following versions to > > be published to pypi? > > > > [tony at thor ceilometer]$ for branch in origin/stable/{ocata,pike,queens,rocky} ; do printf "%-25s: %s\n" $branch "$(git describe --abbrev=0 $branch)" ; done > > origin/stable/ocata : 8.1.5 > > origin/stable/pike : 9.0.6 > > origin/stable/queens : 10.0.1 > > origin/stable/rocky : 11.0.0 > > Sure, go ahead! Thanks! Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From amal.kammoun.2 at gmail.com Mon Sep 10 08:42:37 2018 From: amal.kammoun.2 at gmail.com (amal kammoun) Date: Mon, 10 Sep 2018 10:42:37 +0200 Subject: [openstack-dev] [monasca] Message-ID: Hello, I'm using devstack to have openstack. I installed also monasca using the following link: https://github.com/openstack/monasca-api/tree/master/devstack The problem is that when I log in from horizon, The openstack services and servers are empty on the monitoring tab. 
What should I do in order to get information about all my openstack servers and services? Thank you! Amal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at est.fujitsu.com Mon Sep 10 09:58:49 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Mon, 10 Sep 2018 09:58:49 +0000 Subject: [openstack-dev] [monasca] In-Reply-To: References: Message-ID: <1536573526139.66748@est.fujitsu.com> Hi Amal, The status buttons in the monitoring tab correspond to the alarm states for a given service or node. After installing the devstack plugin, no alarms are defined yet. You have to define them manually, and then the alarm states should appear. Take care Witek ________________________________ From: amal kammoun Sent: Monday, September 10, 2018 10:42 AM To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [monasca] Hello, I'm using devstack to have openstack. I installed also monasca using the following link: https://github.com/openstack/monasca-api/tree/master/devstack The problem is that when I log in from horizon, The openstack services and servers are empty on the monitoring tab. What should I do in order to get information about all my openstack servers and services? Thank you! Amal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Sep 10 10:00:14 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 10 Sep 2018 19:00:14 +0900 Subject: [openstack-dev] [QA][PTG] QA Dinner Night Message-ID: <165c2eca924.f0bde864135262.3309860175009501982@ghanshyammann.com> Hi All, I'd like to propose a QA Dinner night for the QA team at the Denver PTG. I initiated a doodle vote [1] to choose Tuesday or Wednesday night. NOTE: Anyone engaged in QA activities (not necessarily a QA core) is welcome to join.
[1] https://doodle.com/poll/68fudz937v22ghnv -gmann From jean-philippe at evrard.me Mon Sep 10 11:15:02 2018 From: jean-philippe at evrard.me (jean-philippe at evrard.me) Date: Mon, 10 Sep 2018 13:15:02 +0200 Subject: [openstack-dev] [election][tc] Opinion about 'PTL' tooling Message-ID: <7846-5b965200-7-68cc9800@21942213> Hello everyone, In my candidacy [1], I mentioned that the TC should provide more tools to help the PTLs at their duties, for example to track community health. I have questions for the TC candidates: - What is your opinion about said toolkit? Do you see a purpose for it? - Do you think said toolkit should fall under the TC umbrella? After my discussion with Rico Lin (PTL of the Heat project, and TC candidate) yesterday, I am personally convinced that it would be a good idea, and that we should have those tools: As a PTL (but also any person interested to see health of projects) I wanted it and I am not alone. PTLs are focusing on their duties and, as a day is only composed of so few hours, it is possible they won't have the focus to work on said tools to track, in the longer term, the community. For me, tracking community health (and therefore a toolkit for the PTLs/community) is something TC should cover for good governance, and I am not aware of any tooling extracting metrics that can be easily visible and used by anyone. If each project started to have their own implementation of tools, it would be opposite to one of my other goals, which is the simplification of OpenStack. Thanks for reading me, and do not hesitate to ask me questions on the mailing lists, or in real life during the PTG!
Regards, Jean-Philippe Evrard (evrardjp) [1]: https://git.openstack.org/cgit/openstack/election/plain/candidates/stein/TC/jean-philippe at evrard.me From doug at doughellmann.com Mon Sep 10 11:31:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 10 Sep 2018 05:31:11 -0600 Subject: [openstack-dev] [election][tc] Opinion about 'PTL' tooling In-Reply-To: <7846-5b965200-7-68cc9800@21942213> References: <7846-5b965200-7-68cc9800@21942213> Message-ID: <1536579024-sup-8338@lrrr.local> Excerpts from jean-philippe at evrard.me's message of 2018-09-10 13:15:02 +0200: > Hello everyone, > > In my candidacy [1], I mentioned that the TC should provide more tools to help the PTLs at their duties, for example to track community health. > > I have questions for the TC candidates: > - What is your opinion about said toolkit? Do you see a purpose for it? > - Do you think said toolkit should fall under the TC umbrella? > > After my discussion with Rico Lin (PTL of the Heat project, and TC candidate) yesterday, I am personally convinced that it would be a good idea, and that we should have those tools: As a PTL (but also any person interested to see health of projects) I wanted it and I am not alone. PTLs are focusing on their duties and, as a day is only composed of so few hours, it is possible they won't have the focus to work on said tools to track, in the longer term, the community. > > For me, tracking community health (and therefore a toolkit for the PTLs/community) is something TC should cover for good governance, and I am not aware of any tooling extracting metrics that can be easily visible and used by anyone. If each project started to have their own implementation of tools, it would be opposite to one of my other goals, which is the simplification of OpenStack. > > Thanks for reading me, and do not hesitate to ask me questions on the mailing lists, or in real life during the PTG! 
> > Regards, > Jean-Philippe Evrard (evrardjp) > > [1]: https://git.openstack.org/cgit/openstack/election/plain/candidates/stein/TC/jean-philippe at evrard.me > We've had several different sets of scripts at different times to extract review statistics from gerrit. Is that the sort of thing you mean? What information would you find useful? Doug From adriant at catalyst.net.nz Mon Sep 10 11:44:40 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Mon, 10 Sep 2018 23:44:40 +1200 Subject: [openstack-dev] [keystone][adjutant] Sync with adjutant team Message-ID: <778b1d8d-7366-02a1-0568-77b4c618cbac@catalyst.net.nz> As I'm not attending the PTG, I thought I might help put some words against these questions for when you're having the meeting. Plus even if I did want to be online, it would be something like 4am my time. Stein will likely see a lot of Adjutant refactor work as I get myself back onto the project full swing, and onboard a new dev who will be helping me. As that work happens I'll be in a better place to reevaluate exactly where Adjutant sits around Keystone, so I'll be updating you all as much as I can, but for now have some answers that may help. ## They are building on top of keystone, is there anything we can do to make those interactions easier? Most of the admin level APIs we interact with do the work well enough and the 'primitives' in Keystone work as a base layer for us to build account management logic on top of. I think there was something with querying roles up and down a tree that was awkward, but I think I found ways of doing that which didn't amount to silly numbers of API calls. I think the only real thing I can think of which was awkward is that Keystone doesn't have a concept for quota in regards to how many projects a subtree can have (and how deep that tree can be). 
In my WIP HMT management code I'm faking it by sticking a key=value in the root project extras blob and doing the checking in my code, and I guess I can potentially switch to using the limits API once that's considered stable, although I don't know how far Keystone itself wants to go down the route of actually implementing quota checking on its own resources. My WIP code has quota checking implemented in Adjutant and because of how it works it likely will stay there as right now that code does quota as: "allowed to create x number of projects across the whole tree within a period of y days, to a depth of z" where x, y, and z have default values in Adjutant or key=values present in the root project extras blob if a project tree needs custom quotas. As for checking quotas, that's done against total project creations across a whole tree based on the number of previous subproject creation tasks in Adjutant in the same tree. ## Is there anything we should incorporate into keystone? maybe after all the scope work lands in Stein? There are indeed places where with some work the Keystone APIs (with proper policy and roles) could provide a replacement for some of the work in Adjutant, and some cases where it just doesn't make sense because the parts of that workflow aren't things Keystone is meant to do (anything with emails or that may need to talk to external systems). The annoying part here though is that while there are definitely things Adjutant can do, or will do that make sense to have in Keystone, other parts of that which a deployer may want to link with external systems or add more workflow logic on top of don't make sense in Keystone. I'm torn because there are a lot of small features I could propose we add to Keystone, but then I doubt I'd be able to use them because I'd need to still keep elements of them pluggable in Adjutant itself, or would still write logic around, at which stage working with the primitives is somewhat easier. 
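[Editor's sketch] The "x projects within y days, to a depth of z" check described above is small enough to illustrate. Everything here is hypothetical: the extras-blob keys, the default values, and the function names are invented for illustration and are not actual Adjutant code.

```python
# Illustrative sketch of a subtree project-creation quota: per-tree
# overrides live as key=values in the root project's extras blob,
# anything missing falls back to service-wide defaults.
from datetime import datetime, timedelta

DEFAULTS = {"max_projects": 10, "period_days": 30, "max_depth": 3}

def subtree_quota(root_extras):
    # Extras values arrive as strings, so coerce to int.
    return {k: int(root_extras.get(k, v)) for k, v in DEFAULTS.items()}

def may_create_subproject(root_extras, past_creations, new_depth):
    """past_creations: timestamps of earlier subproject-creation tasks
    anywhere in this tree (i.e. Adjutant's own task history)."""
    quota = subtree_quota(root_extras)
    if new_depth > quota["max_depth"]:
        return False
    # Count only creations inside the rolling window of y days.
    cutoff = datetime.utcnow() - timedelta(days=quota["period_days"])
    recent = [t for t in past_creations if t >= cutoff]
    return len(recent) < quota["max_projects"]
```

The signature makes the design point visible: the check depends on Adjutant's task history for the tree, which Keystone never sees, so the enforcement naturally lives in Adjutant rather than Keystone.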
What I might do is make a list and see what 'could' work in Keystone, and if I could find a way to still use it. If I can still use it, great; if not, we can evaluate it anyway and consider it as a minor quality of life improvement to Keystone without an Adjutant. That way Keystone potentially gains features that make sense for it, but if Adjutant is around they can be disabled (or blocked with policy) and Adjutant's more flexible/complex variants can be configured by the deployer. The worry though is that you end up with two cloud variants with slightly similar features, but that's always going to be the case (Keystone + Adjutant vs just Keystone); the question is whether adding similar overlapping features may prove too confusing. ## Is there anything we need to be aware of with the scope work in Stein that needs to be communicated to them? Not that I'm entirely aware of? Maybe some of the internals of Adjutant will need to change as to how Adjutant's admin user interacts with Keystone (as it likely will use system scope to have access to everything), but all the assignments that Adjutant manages for users are in project scope. Need to really play with and figure this out, not sure how much is really changing that will cause us pain. From lijie at unitedstack.com Mon Sep 10 12:17:23 2018 From: lijie at unitedstack.com (Rambo) Date: Mon, 10 Sep 2018 20:17:23 +0800 Subject: [openstack-dev] [docs][cinder] about cinder volume qos Message-ID:
[1]https://docs.openstack.org/cinder/latest/admin/blockstorage-basic-volume-qos.html Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Sep 10 12:19:05 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 10 Sep 2018 21:19:05 +0900 Subject: [openstack-dev] [election][tc] Opinion about 'PTL' tooling In-Reply-To: <1536579024-sup-8338@lrrr.local> References: <7846-5b965200-7-68cc9800@21942213> <1536579024-sup-8338@lrrr.local> Message-ID: <165c36bc767.d0ec1dc4141804.2411531936340666307@ghanshyammann.com> ---- On Mon, 10 Sep 2018 20:31:11 +0900 Doug Hellmann wrote ---- > Excerpts from jean-philippe at evrard.me's message of 2018-09-10 13:15:02 +0200: > > Hello everyone, > > > > In my candidacy [1], I mentioned that the TC should provide more tools to help the PTLs at their duties, for example to track community health. > > > > I have questions for the TC candidates: > > - What is your opinion about said toolkit? Do you see a purpose for it? > > - Do you think said toolkit should fall under the TC umbrella? > > > > After my discussion with Rico Lin (PTL of the Heat project, and TC candidate) yesterday, I am personally convinced that it would be a good idea, and that we should have those tools: As a PTL (but also any person interested to see health of projects) I wanted it and I am not alone. PTLs are focusing on their duties and, as a day is only composed of so few hours, it is possible they won't have the focus to work on said tools to track, in the longer term, the community. > > > > For me, tracking community health (and therefore a toolkit for the PTLs/community) is something TC should cover for good governance, and I am not aware of any tooling extracting metrics that can be easily visible and used by anyone. If each project started to have their own implementation of tools, it would be opposite to one of my other goals, which is the simplification of OpenStack. 
> > > > Thanks for reading me, and do not hesitate to ask me questions on the mailing lists, or in real life during the PTG! > > > > Regards, > > Jean-Philippe Evrard (evrardjp) > > > > [1]: https://git.openstack.org/cgit/openstack/election/plain/candidates/stein/TC/jean-philippe at evrard.me > > > > We've had several different sets of scripts at different times to > extract review statistics from gerrit. Is that the sort of thing you > mean? > > What information would you find useful? Yeah, if we can have the exact requirements, or the action items PTLs miss, then it will be clearer whether such tooling is worth having. Overall I like the idea of giving more awareness about PTL work, but that is more a matter of teaching and guiding the PTL. Before we think of a tool to manage PTL responsibilities, we need to list the issues it will solve. Personally, as PTL, I have gone through the PTL responsibility guide [1] and filter the PTL-tagged email, which I check daily on priority. Further, I follow the TODO things a PTL has to do around releases, the PTG, the Summit, etc., which works perfectly for me. I see this more as a PTL responsibility than something the TC should track for PTLs. That's my point of view as PTL and as TC candidate, but I would like to hear from other PTLs whether they need help tracking their responsibilities, and why. 
[1] https://docs.openstack.org/project-team-guide/ptl.html > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From rico.lin.guanyu at gmail.com Mon Sep 10 12:29:27 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 10 Sep 2018 06:29:27 -0600 Subject: [openstack-dev] [election][tc] Opinion about 'PTL' tooling In-Reply-To: <1536579024-sup-8338@lrrr.local> References: <7846-5b965200-7-68cc9800@21942213> <1536579024-sup-8338@lrrr.local> Message-ID: On Mon, Sep 10, 2018 at 5:31 AM Doug Hellmann wrote: > Excerpts from jean-philippe at evrard.me's message of 2018-09-10 13:15:02 > +0200: > > Hello everyone, > > > > In my candidacy [1], I mentioned that the TC should provide more tools > to help the PTLs at their duties, for example to track community health. > > > > I have questions for the TC candidates: > > - What is your opinion about said toolkit? Do you see a purpose for it? > > - Do you think said toolkit should fall under the TC umbrella? > > > > After my discussion with Rico Lin (PTL of the Heat project, and TC > candidate) yesterday, I am personally convinced that it would be a good > idea, and that we should have those tools: As a PTL (but also any person > interested to see health of projects) I wanted it and I am not alone. PTLs > are focusing on their duties and, as a day is only composed of so few > hours, it is possible they won't have the focus to work on said tools to > track, in the longer term, the community. > > > > For me, tracking community health (and therefore a toolkit for the > PTLs/community) is something TC should cover for good governance, and I am > not aware of any tooling extracting metrics that can be easily visible and > used by anyone. 
If each project started to have their own implementation of > tools, it would be opposite to one of my other goals, which is the > simplification of OpenStack. > > > > Thanks for reading me, and do not hesitate to ask me questions on the > mailing lists, or in real life during the PTG! > > > > Regards, > > Jean-Philippe Evrard (evrardjp) > > > > [1]: > https://git.openstack.org/cgit/openstack/election/plain/candidates/stein/TC/jean-philippe at evrard.me > > > > We've had several different sets of scripts at different times to > extract review statistics from gerrit. Is that the sort of thing you > mean? > > What information would you find useful? > First of all, I know I'm awake because of jet lag, but it's a surprise to see you all are too! Are you guys really in Denver!? Or just some cardboard cut-outs I saw! Okay, let's get back to the mail. As a PTL (not a great one, but one who tries to do what I can and learn from others), I do see the benefit of a toolkit that properly alerts (or informs) PTLs about how people have been doing in their projects. As checking the health of projects has been a big task for TCs over the last cycle, I believe this might be something we can discuss further as part of that TC task. Right now we're asking TCs to assist teams in producing a health report. But if we can provide a list of tools that might help PTLs (or cores) generate some information about the health situation, then PTLs can see how things are going after they adjust their strategies. For toolkits, I believe there is already something we can collect for PTLs, so we can use what is already there and make sure we don't take up everyone's time on this task. I'm aware there are challenges when we talk about how to make newly joined people feel good, and how we can help PTLs (experienced or not) adjust their approach or get better communication across projects, so PTLs get a chance to share and learn from others if they see an improvement that also applies to their own team. 
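The Gerrit review-statistics scripts Doug mentions generally talk to Gerrit's REST API, whose JSON responses are prefixed with a `)]}'` line to defeat cross-site script inclusion. A minimal sketch of the parsing side is below; the query endpoint format follows Gerrit's documented REST API, and the sample payload in the usage is made up:

```python
import json

GERRIT_XSSI_PREFIX = ")]}'"

def parse_gerrit_json(body):
    """Strip Gerrit's anti-XSSI prefix and decode the JSON payload,
    e.g. from GET /changes/?q=project:X+status:merged."""
    if body.startswith(GERRIT_XSSI_PREFIX):
        body = body[len(GERRIT_XSSI_PREFIX):]
    return json.loads(body)

def changes_per_owner(changes):
    """Count merged changes per owner as a crude activity metric."""
    counts = {}
    for change in changes:
        owner = change.get("owner", {}).get("name", "unknown")
        counts[owner] = counts.get(owner, 0) + 1
    return counts
```

Whether counts like these actually indicate project health is, of course, exactly the open question in this thread.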
Also, I agree with Doug that it's important to bring this idea to the table and discuss exactly what information we want to get from the data, and what information TCs feel is helpful for tracking health. This brings me to a suggestion for everyone: I think it's time to renew some of the documentation in https://docs.openstack.org/project-team-guide/ptl.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Sep 10 12:38:11 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 10 Sep 2018 06:38:11 -0600 Subject: [openstack-dev] [election][tc] Opinion about 'PTL' tooling In-Reply-To: References: <7846-5b965200-7-68cc9800@21942213> <1536579024-sup-8338@lrrr.local> Message-ID: <0D98A746-F005-4D72-8F40-B900378D8D62@vexxhost.com> I think something we should take into consideration is *what* you consider health, because the way we've gone about health checks so far is not something that can become a toolkit; it was more a matter of asking questions, etc. Sent from my iPhone > On Sep 10, 2018, at 6:29 AM, Rico Lin wrote: > > > >> On Mon, Sep 10, 2018 at 5:31 AM Doug Hellmann wrote: >> Excerpts from jean-philippe at evrard.me's message of 2018-09-10 13:15:02 +0200: >> > Hello everyone, >> > >> > In my candidacy [1], I mentioned that the TC should provide more tools to help the PTLs at their duties, for example to track community health. >> > >> > I have questions for the TC candidates: >> > - What is your opinion about said toolkit? Do you see a purpose for it? >> > - Do you think said toolkit should fall under the TC umbrella? >> > >> > After my discussion with Rico Lin (PTL of the Heat project, and TC candidate) yesterday, I am personally convinced that it would be a good idea, and that we should have those tools: As a PTL (but also any person interested to see health of projects) I wanted it and I am not alone. 
PTLs are focusing on their duties and, as a day is only composed of so few hours, it is possible they won't have the focus to work on said tools to track, in the longer term, the community. >> > >> > For me, tracking community health (and therefore a toolkit for the PTLs/community) is something TC should cover for good governance, and I am not aware of any tooling extracting metrics that can be easily visible and used by anyone. If each project started to have their own implementation of tools, it would be opposite to one of my other goals, which is the simplification of OpenStack. >> > >> > Thanks for reading me, and do not hesitate to ask me questions on the mailing lists, or in real life during the PTG! >> > >> > Regards, >> > Jean-Philippe Evrard (evrardjp) >> > >> > [1]: https://git.openstack.org/cgit/openstack/election/plain/candidates/stein/TC/jean-philippe at evrard.me >> > >> >> We've had several different sets of scripts at different times to >> extract review statistics from gerrit. Is that the sort of thing you >> mean? >> >> What information would you find useful? > > First of all, I know I'm awake because jet lag, but it's surprise to see you all are too! Are you guys really in Denver!? or just some cardboard cut-out I saw! > > Okay, let's back to the mail. > As a PTL (not like a good one, but try to do what I can and learn from others), I do see the benifit to have tool kit > to properly alarm (or show to) PTL about how people been in projects. As checking the health of projects been a big task for TCs for last cycle, I believe this might be something we can further discussion in that TCs task. > > Right now we're asking TCs to asisit team to get a health report. But if we can provide a list of tools that mgith help PTLs (or cores) to generate some information to see the health situation. so PTLs can see how's things going after they adjust their strategies. For toolkits, I believe there're already something we can collect for PTLs? 
So we can use what already there and make sure we don't over taken everyone's time for this task. > I aware there are challenges when we talk about how to make nwe-join people feel good and how can we help PTLs (with experiences or not) to adjust their way or to get better communications cross projects so PTLs will get a chances to share and learn from others if they see any improvement also applied to their team as well. > > > Also I agree with Doug that it's improtant to bring this idea on table and discuss about what exactly information we want to get from data. And what information TCs feel helpful to track health condition. > > > Now this bring me some idea of suggestion for all that I think it's time to renew some documentation in https://docs.openstack.org/project-team-guide/ptl.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Sep 10 13:07:48 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 Sep 2018 13:07:48 +0000 Subject: [openstack-dev] [election][tc] Opinion about 'PTL' tooling In-Reply-To: <0D98A746-F005-4D72-8F40-B900378D8D62@vexxhost.com> References: <7846-5b965200-7-68cc9800@21942213> <1536579024-sup-8338@lrrr.local> <0D98A746-F005-4D72-8F40-B900378D8D62@vexxhost.com> Message-ID: <20180910130748.zukcbjcdz2dhicow@yuggoth.org> On 2018-09-10 06:38:11 -0600 (-0600), Mohammed Naser wrote: > I think something we should take into consideration is *what* you > consider health because the way we’ve gone about it over health > checks is not something that can become a toolkit because it was > more of question asking, etc [...] I was going to follow up with something similar. 
It's not as if the TC has a toolkit of any sort at this point to come up with the information we're assembling in the health tracker either. It's built up from interviewing PTLs, reading meeting logs, looking at the changes which merge to teams' various deliverable repositories, asking around as to whether they've missed important deadlines such as release milestones (depending on what release models they follow) or PTL nominations, looking over cycle goals to see how far along they are, and so on. Extremely time-consuming which is why it's taken us most of a release cycle and we still haven't finished a first pass. Assembling some of this information might be automatable if we make adjustments to how the data/processes on which it's based are maintained, but at this point we're not even sure which ones are problem indicators at all and are just trying to provide the clearest picture we can. If we come up with a detailed checklist and some of the checks on that list can be automated in some way, that seems like a good thing. However, the original data should be publicly accessible so I don't see why it needs to be members of the technical committee who write the software to collect that. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From bdobreli at redhat.com Mon Sep 10 13:38:24 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 10 Sep 2018 15:38:24 +0200 Subject: [openstack-dev] [tripleo] quickstart for humans In-Reply-To: References: <20180830142821.gw76edbscvhh3afp@localhost.localdomain> <4cd2fafa-f644-1c1f-56e4-010d1360cf04@redhat.com> Message-ID: <71812414-839d-3f93-e9bf-df736d5b93c6@redhat.com> On 8/31/18 6:03 PM, Raoul Scarazzini wrote: > On 8/31/18 12:07 PM, Jiří Stránský wrote: > [...] >> * "for humans" definition differs significantly based on who you ask. >> E.g. 
my intention with [2] was to readily expose *more* knobs and tweaks >> and be more transparent with the underlying workings of Ansible, because >> I felt like quickstart.sh hides too much from me. In my opinion [2] is >> sufficiently "for humans", yet it does pretty much the opposite of what >> you're looking for. > > Hey Jiri, > I think that "for humans" means simply that you launch the command with > just one parameter (i.e. the virthost), and then you have something. yes, this ^^ I'd also add one more thing: if you later remove that something while having the virthost as your localhost, and the non-root user as your current logged-in user, you remain operational :) Teardown is quite destructive for CI, which might not be applicable to devboxes running on a laptop. I have a few changes [0] in progress to address that case. [0] https://review.openstack.org/#/q/topic:localcon+(status:open+OR+status:merged) > And because of this I think it is just a matter of concentrating the > efforts on returning quickstart.sh to its original scope: making you > launch it with just one parameter and have an available environment > after a while (OK, sometimes more than a while). > Since part of the recent discussions were around the hypothesis of > removing it, maybe we can think about making it useful again. Mostly > because it is right that the needs of everyone are different, but on the > other side with a solid starting point (the default) you can think about > customizing depending on your needs. > I'm for recycling what we have, planet (and me) will enjoy it! > > My 0,0000002 cents. 
> -- Best regards, Bogdan Dobrelya, Irc #bogdando From adam at sotk.co.uk Mon Sep 10 13:43:18 2018 From: adam at sotk.co.uk (Adam Coldrick) Date: Mon, 10 Sep 2018 14:43:18 +0100 Subject: [openstack-dev] [StoryBoard] Project Update/Some New Things Message-ID: <1536586998.2089.20.camel@sotk.co.uk> Hi all, It's been a long while since a "project update" type email about StoryBoard, but over the past few months we merged some patches which either are worth mentioning or changed things in ways that would benefit from some explanation. # Linking to projects by name Keen observers might've noticed that StoryBoard recently grew the ability to link to projects by name, rather than by ID number. All the links to projects in the UI have been replaced with links in this form, and it's probably a good idea for folk to start using them in any documentation they have. These links look like https://storyboard.openstack.org/#!/project/openstack-infra/storyboard # Linking to project groups by name More recently (yesterday) it also became possible to similarly link to project groups by name rather than ID number. Links to project groups in the UI have been replaced with links in this form, and again if you have any documentation using the ID links it might be nice to update to using the name. These links look like https://storyboard.openstack.org/#!/project_group/storyboard # Finding stories from a task ID It is now possible to navigate to a story given just a task ID, if for whatever reason that's all the information you have available. A link like https://storyboard.openstack.org/#!/task/12389 will work. This will redirect to the story containing the task, and is the first part of work to support linking directly to an individual task in a story. # UI updates There have been some visual changes in the webclient's user interface to attempt to make things generally feel nicer. This work is also not totally finished and there are further changes planned. 
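For teams with many ID-based StoryBoard project links in their documentation, a one-off rewrite script along the following lines could help migrate to the name-based form described above. The ID-to-name mapping here is hypothetical; in practice it would need to come from the StoryBoard API rather than being hard-coded:

```python
import re

# Hypothetical mapping; fetch the real one from the StoryBoard API.
ID_TO_NAME = {456: "openstack-infra/storyboard"}

LINK_RE = re.compile(r"(storyboard\.openstack\.org/#!/project/)(\d+)\b")

def modernize_links(text):
    """Rewrite ID-based project links to the newer name-based form,
    leaving any ID we cannot resolve untouched."""
    def repl(match):
        project_id = int(match.group(2))
        name = ID_TO_NAME.get(project_id)
        return match.group(1) + name if name else match.group(0)
    return LINK_RE.sub(repl, text)
```

Running this over a docs tree leaves unknown IDs alone, so it is safe to apply incrementally as the mapping is filled in.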
One big change is that the "Profile" button in the sidebar used for setting preferences and managing access tokens has gone away. The URLs used to access this functionality are unchanged, but the links in the UI can now be found by clicking the user name in the top-right to open the dropdown menu, which previously only contained the log out button. # Bugfixes Various minor bugs and annoyances have also been addressed. For example search should behave somewhat more predictably than it did at the start of the year, syntax highlighting actually works again, and the markdown parser should be less aggressive in its swallowing of line breaks. Stay tuned in the future for more fixes and features, and feel free to pop into #storyboard with any questions or comments you may have. We're always open to suggestions and even more open to patches! We also have a PTG session on Tuesday afternoon, currently intended to be in Blanca Peak. Feel free to drop by to join the discussions and/or add to the etherpad[0]. Best regards, Adam (SotK) [0]: https://etherpad.openstack.org/p/sb-stein-ptg-planning From mbultel at redhat.com Mon Sep 10 14:04:50 2018 From: mbultel at redhat.com (mathieu bultel) Date: Mon, 10 Sep 2018 16:04:50 +0200 Subject: [openstack-dev] [TripleO] Plan management refactoring for Life cycle Message-ID: <03885d00-c63c-91ae-af99-c8d89ae4d7c4@redhat.com> Hi folks, Last week I wrote a blueprint and a spec [1] proposing a change to the way we use and manage the Plan in TripleO for deployment and life cycle operations (update/upgrade and scale). While I was working on trying to simplify the implementation of update and upgrade for end-user usage, I found it very hard to follow all the calls that the TripleO client was making to the Heat and Swift clients. I traced the calls and found that we can safely and easily decrease their number and simplify the way we compute and render the TripleO Heat Templates files. 
I did a PoC to see what the problems behind that would be and what we could do without breaking the "standard" usage and all the particular things the current code handles (specific deployments, configurations, and so on). From this refactoring I also see another gain for the life cycle part of TripleO, where we used to try to make things simpler and safer but constantly failed due to this complexity and all the "special cases" we faced during testing. The result is that, when users need to perform an update/upgrade of their deployment, they really have to be careful and pay a lot of attention to all the options and -e environment files they previously used, at the risk of making a simple mistake and totally messing up the deployment. So my goals with this PoC and this BP are to address those points by:
- simplifying and reducing the number of calls between the clients,
- providing a simple way to create and update the Plan, even by amending the plan with only particular files, configs, or playbooks,
- storing all the information provided by the user by uploading all the files outside of the plan,
- keeping track of the environment files passed to the CLI,
- tracing the life cycle story of the deployment.
So feel free to comment, or add your concerns or feedback around this. Cheers, Mathieu [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-plan-management https://review.openstack.org/599396 [2] https://review.openstack.org/583145 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Sep 10 14:34:49 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 Sep 2018 14:34:49 +0000 Subject: [openstack-dev] [OpenStack-Infra] [StoryBoard] Project Update/Some New Things In-Reply-To: <1536586998.2089.20.camel@sotk.co.uk> References: <1536586998.2089.20.camel@sotk.co.uk> Message-ID: <20180910143449.ycbttjla2tn4ysql@yuggoth.org> On 2018-09-10 14:43:18 +0100 (+0100), Adam Coldrick wrote: [...] 
> # Linking to projects by name > > Keen observers might've noticed that StoryBoard recently grew the ability > to link to projects by name, rather than by ID number. All the links to > projects in the UI have been replaced with links in this form, and its > probably a good idea for folk to start using them in any documentation > they have. These links look like > > https://storyboard.openstack.org/#!/project/openstack-infra/storyboard [...] Worth noting, this has made it harder to find the numeric project ID without falling back on the API. Change https://review.openstack.org/600893 merged to the releases repository yesterday allowing deliverable repositories to be referenced by their StoryBoard project name rather than only the ID number. If there are other places in tooling and automation where we relied on the project ID number, work should be done to update those similarly. > # Finding stories from a task ID > > It is now possible to navigate to a story given just a task ID, if for > whatever reason that's all the information you have available. A link like > > https://storyboard.openstack.org/#!/task/12389 > > will work. This will redirect to the story containing the task, and is the > first part of work to support linking directly to an individual task in a > story. [...] As an aside, I think this makes it possible now for us to start hyperlinking Task footers in commit messages within the Gerrit change view. I'll try and figure out what we need to adjust in our Gerrit commentlink and its-storyboard plugin configs to make that happen. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Mon Sep 10 15:25:45 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 10 Sep 2018 10:25:45 -0500 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: References: <808519e4-8b10-26a1-e069-f67a1aafd2f1@nemebean.com> Message-ID: <34ce3a03-84d8-cfa3-cec3-b4254975c4ba@nemebean.com> We had talked about Tuesday afternoon. I need to sync up with Lance and figure out exactly when will work best. On 09/08/2018 10:58 AM, Jay S Bryant wrote: > Ben, > > Ping me when you are planning on having this discussion if you think of > it.  Since there is interest in this for Cinder I would like to try to > be there. > > Thanks! > > Jay > > > On 9/7/2018 1:43 PM, Ben Nemec wrote: >> I will also note that I had an oslo.limit topic on the Oslo PTG >> schedule: https://etherpad.openstack.org/p/oslo-stein-ptg-planning >> >> I don't know whether anybody from Jaze's team will be there, but if so >> that would be a good opportunity for some face-to-face discussion. I >> didn't give it a whole lot of time, but I'm open to extending it if >> that would be helpful. >> >> On 09/07/2018 01:34 PM, Lance Bragstad wrote: >>> That would be great! I can break down the work a little bit to help >>> describe where we are at with different parts of the initiative. >>> Hopefully it will be useful for your colleagues in case they haven't >>> been closely following the effort. >>> >>> # keystone >>> >>> Based on the initial note in this thread, I'm sure you're aware of >>> keystone's status with respect to unified limits. But to recap, the >>> initial implementation landed in Queens and targeted flat enforcement >>> [0]. During the Rocky PTG we sat down with other services and a few >>> operators to explain the current status in keystone and if either >>> developers or operators had feedback on the API specifically. Notes >>> were captured in etherpad [1]. 
We spent the Rocky cycle fixing >>> usability issues with the API [2] and implementing support for a >>> hierarchical enforcement model [3]. >>> >>> At this point keystone is ready for services to start consuming the >>> unified limits work. The unified limits API is still marked as stable >>> and it will likely stay that way until we have at least one project >>> using unified limits. We can use that as an opportunity to do a final >>> flush of any changes that need to be made to the API before fully >>> supporting it. The keystone team expects that to be a quick >>> transition, as we don't want to keep the API hanging in an >>> experimental state. It's really just a safe guard to make sure we >>> have the opportunity to use it in another service before fully >>> committing to the API. Ultimately, we don't want to prematurely mark >>> the API as supported when other services aren't even using it yet, >>> and then realize it has issues that could have been fixed prior to >>> the adoption phase. >>> >>> # oslo.limit >>> >>> In parallel with the keystone work, we created a new library to aid >>> services in consuming limits. Currently, the sole purpose of >>> oslo.limit is to abstract project and project hierarchy information >>> away from the service, so that services don't have to reimplement >>> client code to understand project trees, which could arguably become >>> complex and lead to inconsistencies in u-x across services. >>> >>> Ideally, a service should be able to pass some relatively basic >>> information to oslo.limit and expect an answer on whether or not >>> usage for that claim is valid. For example, here is a project ID, >>> resource name, and resource quantity, tell me if this project is over >>> it's associated limit or default limit. 
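The flat-enforcement check Lance describes, where a service passes a project ID, resource name, and requested quantity and gets back an allowed/denied answer, reduces to something like the sketch below. The names are hypothetical, not the actual oslo.limit interface, which was still under review at the time:

```python
def claim_is_valid(project_id, resource, delta, get_limit, get_usage):
    """Flat enforcement: allow the claim iff current usage plus the
    requested delta stays within the project's registered (or default) limit.

    get_limit(project_id, resource) -> limit from keystone's unified limits
    get_usage(project_id, resource) -> current usage counted by the service
    """
    limit = get_limit(project_id, resource)
    usage = get_usage(project_id, resource)
    return usage + delta <= limit
```

Note the division of labor this implies: keystone stores the limits, the service counts its own usage, and the library only does the comparison (and, for the hierarchical model, the tree-walking that this sketch omits).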
>>> >>> We're currently working on implementing the enforcement bits of >>> oslo.limit, which requires making API calls to keystone in order to >>> retrieve the deployed enforcement model, limit information, and >>> project hierarchies. Then it needs to reason about those things and >>> calculate usage from the service in order to determine if the request >>> claim is valid or not. There are patches up for this work, and >>> reviews are always welcome [4]. >>> >>> Note that we haven't released oslo.limit yet, but once the basic >>> enforcement described above is implemented we will. Then services can >>> officially pull it into their code as a dependency and we can work >>> out remaining bugs in both keystone and oslo.limit. Once we're >>> confident in both the API and the library, we'll bump oslo.limit to >>> version 1.0 at the same time we graduate the unified limits API from >>> "experimental" to "supported". Note that oslo libraries <1.0 are >>> considered experimental, which fits nicely with the unified limit API >>> being experimental as we shake out usability issues in both pieces of >>> software. >>> >>> # services >>> >>> Finally, we'll be in a position to start integrating oslo.limit into >>> services. I imagine this to be a coordinated effort between keystone, >>> oslo, and service developers. I do have a patch up that adds a >>> conceptual overview for developers consuming oslo.limit [5], which >>> renders into [6]. >>> >>> To be honest, this is going to be a very large piece of work and it's >>> going to require a lot of communication. In my opinion, I think we >>> can use the first couple iterations to generate some well-written >>> usage documentation. Any questions coming from developers in this >>> phase should probably be answered in documentation if we want to >>> enable folks to pick this up and run with it. Otherwise, I could see >>> the handful of people pushing the effort becoming a bottle neck in >>> adoption. 
>>> >>> Hopefully this helps paint the landscape of where things are >>> currently with respect to each piece. As always, let me know if you >>> have any additional questions. If people want to discuss online, you >>> can find me, and other contributors familiar with this topic, in >>> #openstack-keystone or #openstack-dev on IRC (nic: lbragstad). >>> >>> [0] >>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html >>> >>> [1] https://etherpad.openstack.org/p/unified-limits-rocky-ptg >>> [2] https://tinyurl.com/y6ucarwm >>> [3] >>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/strict-two-level-enforcement-model.html >>> >>> [4] >>> https://review.openstack.org/#/q/project:openstack/oslo.limit+status:open >>> >>> [5] https://review.openstack.org/#/c/600265/ >>> [6] >>> http://logs.openstack.org/65/600265/3/check/openstack-tox-docs/a6bcf38/html/user/usage.html >>> >>> >>> On Thu, Sep 6, 2018 at 8:56 PM Jaze Lee >> > wrote: >>> >>>     Lance Bragstad > 于 >>>     2018年9月6日周四 下午10:01写道: >>>      > >>>      > I wish there was a better answer for this question, but currently >>>     there are only a handful of us working on the initiative. If you, or >>>     someone you know, is interested in getting involved, I'll happily >>>     help onboard people. >>> >>>     Well,I can recommend some my colleges to work on this. I wish in S, >>>     all service can use unified limits to do quota job. >>> >>>      > >>>      > On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee >>     > wrote: >>>      >> >>>      >> On Stein only one service? >>>      >> Is there some methods to move this more fast? >>>      >> Lance Bragstad >>     > 于2018年9月5日周三 下午9:29写道: >>>      >> > >>>      >> > Not yet. Keystone worked through a bunch of usability >>>     improvements with the unified limits API last release and created >>>     the oslo.limit library. 
We have a patch or two left to land in >>>     oslo.limit before projects can really start using unified limits >>> [0]. >>>      >> > >>>      >> > We're hoping to get this working with at least one resource in >>>     another service (nova, cinder, etc...) in Stein. >>>      >> > >>>      >> > [0] >>> https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init >>> >>>      >> > >>>      >> > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee >>     > wrote: >>>      >> >> >>>      >> >> Hello, >>>      >> >>     Does nova and cinder  use keystone's unified limits api >>>     to do quota job? >>>      >> >>     If not, is there a plan to do this? >>>      >> >>     Thanks a lot. >>>      >> >> >>>      >> >> -- >>>      >> >> 谦谦君子 >>>      >> >> >>>      >> >> >>> __________________________________________________________________________ >>> >>>      >> >> OpenStack Development Mailing List (not for usage questions) >>>      >> >> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>>      >> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>      >> > >>>      >> > >>> __________________________________________________________________________ >>> >>>      >> > OpenStack Development Mailing List (not for usage questions) >>>      >> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>>      >> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>      >> >>>      >> >>>      >> >>>      >> -- >>>      >> 谦谦君子 >>>      >> >>>      >> >>> __________________________________________________________________________ >>> >>>      >> OpenStack Development Mailing List (not for usage questions) >>>      >> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>>      >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>      > >>>      > >>> 
__________________________________________________________________________ >>> >>>      > OpenStack Development Mailing List (not for usage questions) >>>      > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>>      > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>>     --     谦谦君子 >>> >>> __________________________________________________________________________ >>> >>>     OpenStack Development Mailing List (not for usage questions) >>>     Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Mon Sep 10 15:29:44 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 10 Sep 2018 09:29:44 -0600 Subject: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests In-Reply-To: <165bfbece69.b84e6226124507.1660751732214305006@ghanshyammann.com> References: <11be89ad-a59a-1fe6-5c7b-badb4a06e643@fried.cc> 
<1b586dfd-594f-3f44-b6f3-8b232aa0ab5b@fried.cc> <165b2ae50f1.11ea66ed588256.3160647352194636247@ghanshyammann.com> <12a19284-f459-c6a0-03cc-b300f04db777@gmail.com> <165bfbece69.b84e6226124507.1660751732214305006@ghanshyammann.com> Message-ID: On 9/9/2018 1:11 PM, Ghanshyam Mann wrote: > Yeah, Tempest would not fit as the best location for such tagging or whitelisting. I think nova may be a better choice if nothing else. OK, I've thrown this on the nova PTG etherpad agenda [1] as a misc item to talk about. [1] https://etherpad.openstack.org/p/nova-ptg-stein -- Thanks, Matt From jtomasek at redhat.com Mon Sep 10 16:11:46 2018 From: jtomasek at redhat.com (Jiri Tomasek) Date: Mon, 10 Sep 2018 10:11:46 -0600 Subject: [openstack-dev] [TripleO] Plan management refactoring for Life cycle In-Reply-To: <03885d00-c63c-91ae-af99-c8d89ae4d7c4@redhat.com> References: <03885d00-c63c-91ae-af99-c8d89ae4d7c4@redhat.com> Message-ID: Hi Mathieu, Thanks for bringing up the topic. There are several efforts currently in progress which should lead to solving the problems you're describing. We are working on introducing CLI commands which would perform the deployment configuration operations on the deployment plan in Swift. This is a key step toward finally reaching CLI and GUI compatibility/interoperability. The CLI will perform actions to configure the deployment (roles, networks, environment selection, parameter setting, etc.) by calling Mistral workflows which store the information in the deployment plan in Swift. The result is that all the information that defines the deployment is stored in a central place, the deployment plan in Swift, and the deploy command is turned into a simple 'openstack overcloud deploy'. The deployment plan then has plan-environment.yaml, which holds the list of environments used and customized parameter values; roles-data.yaml, which carries the roles definition; and network-data.yaml, which carries the networks definition.
The information stored in these files (and the deployment plan in general) can then be treated as the source of information about the deployment. The deployment can then be easily exported and reliably replicated. Here is the document which we put together to identify the missing pieces between the GUI, CLI and Mistral TripleO API. We'll use this to discuss the topic at the PTG this week and define the work needed to achieve complete interoperability. [1] Also, there is a pending patch from Steven Hardy which aims to remove CLI-specific environment merging, which should fix the problem with tracking the environments used in a CLI deployment. [2] [1] https://gist.github.com/jtomasek/8c2ae6118be0823784cdafebd9c0edac (Apologies for the inconvenient format, I'll try to update this to a better/editable format. Original doc: https://docs.google.com/spreadsheets/d/1ERfx2rnPq6VjkJ62JlA_E6jFuHt9vVl3j95dg6-mZBM/edit?usp=sharing ) [2] https://review.openstack.org/#/c/448209/ -- Jirka On Mon, Sep 10, 2018 at 8:05 AM mathieu bultel wrote: > Hi folks, > > Last week I wrote a Blueprint and a spec [1] to propose changing the way > we use and manage the Plan in TripleO for the Deployment and the Life > cycle (update/upgrade and scale). > > While I was working on trying to simplify the implementation of the > Update and Upgrade for end-user usage, I found it very hard to follow all > the calls that the TripleO Client was making to the HeatClient and > SwiftClient. > > I traced the calls and found that we can safely and easily decrease the > number of calls and simplify the way that we are computing & rendering > the TripleO Heat Templates files. > > I did a PoC to see what the problems behind that would be and what we > could do without breaking the "standard" usage and all the particular > things that the current code handles (specific deployments and > configurations & so on).
> > Through this refactoring I see another gain for the life cycle part of > TripleO, where we used to try to make things simpler & safer but we > constantly failed due to this complexity and all the "special cases" that > we faced during testing. > > The result is that, when a user needs to perform an update/upgrade of his > deployment, he really has to be careful and pay a lot of attention to all > the options and -e environment files that he previously used, with the risk > of making a simple mistake and totally messing up the deployment. > > So my goals with this PoC and this BP are to try to address those points > by: > > simplifying and reducing the number of calls between the clients, > > having a simple way to create and update the Plan, even by amending the > plan with only a particular file / config or Playbooks, > > storing all the information provided by the user by uploading all the > files outside of the plan, > > keeping track of the environment files passed to the CLI, > > tracing the life cycle story of the deployment. > > So feel free to comment, add your concerns or feedback around this. > > Cheers, > > Mathieu > > [1] > > https://blueprints.launchpad.net/tripleo/+spec/tripleo-plan-management > > https://review.openstack.org/599396 > > [2] > > https://review.openstack.org/583145 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zbitter at redhat.com Mon Sep 10 16:14:58 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 10 Sep 2018 12:14:58 -0400 Subject: [openstack-dev] [OpenStack-Infra] [StoryBoard] Project Update/Some New Things In-Reply-To: <20180910143449.ycbttjla2tn4ysql@yuggoth.org> References: <1536586998.2089.20.camel@sotk.co.uk> <20180910143449.ycbttjla2tn4ysql@yuggoth.org> Message-ID: <46a0b2a0-61a6-4dba-9411-ced0310522b3@redhat.com> On 10/09/18 10:34 AM, Jeremy Stanley wrote: > On 2018-09-10 14:43:18 +0100 (+0100), Adam Coldrick wrote: > [...] >> # Linking to projects by name >> >> Keen observers might've noticed that StoryBoard recently grew the ability >> to link to projects by name, rather than by ID number. All the links to >> projects in the UI have been replaced with links in this form, and it's >> probably a good idea for folk to start using them in any documentation >> they have. These links look like >> >> https://storyboard.openstack.org/#!/project/openstack-infra/storyboard Thanks for this!!! > [...] > > Worth noting, this has made it harder to find the numeric project ID > without falling back on the API. Change > https://review.openstack.org/600893 merged to the releases > repository yesterday allowing deliverable repositories to be > referenced by their StoryBoard project name rather than only the ID > number. If there are other places in tooling and automation where we > relied on the project ID number, work should be done to update those > similarly. In the docs configuration we use the ID for generating the bugs link. We also rely on it being a numeric ID (passed as a string - it crashes if you use an int) rather than a name to determine whether the target is a Launchpad project or a StoryBoard project. >> # Finding stories from a task ID >> >> It is now possible to navigate to a story given just a task ID, if for >> whatever reason that's all the information you have available. 
A link like >> >> https://storyboard.openstack.org/#!/task/12389 >> >> will work. This will redirect to the story containing the task, and is the >> first part of work to support linking directly to an individual task in a >> story. > [...] > > As an aside, I think this makes it possible now for us to start > hyperlinking Task footers in commit messages within the Gerrit > change view. I'll try and figure out what we need to adjust in our > Gerrit commentlink and its-storyboard plugin configs to make that > happen. +1 From gmann at ghanshyammann.com Mon Sep 10 16:22:00 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 11 Sep 2018 01:22:00 +0900 Subject: [openstack-dev] [QA] [all] QA Stein PTG Planning In-Reply-To: <165a8de52ee.126eac3cc30493.5374930562208326542@ghanshyammann.com> References: <165a8de52ee.126eac3cc30493.5374930562208326542@ghanshyammann.com> Message-ID: <165c44a2f53.e9d70d51156096.4344369541289185126@ghanshyammann.com> A few more topics to discuss for QA came in at the last moment [1]. I have added them to the schedule, and a few existing topics have been rescheduled as a result; please check the latest schedule for the QA topics here [2]. I have created a dedicated Etherpad for each topic; links are in the main Etherpad [1]. I request all topic owners to fill in the details on their respective Etherpads well before their scheduled slot. [1] https://etherpad.openstack.org/p/qa-stein-ptg [2] https://ethercalc.openstack.org/Stein-PTG-QA-Schedule -gmann ---- On Wed, 05 Sep 2018 17:34:27 +0900 Ghanshyam Mann wrote ---- > Hi All, > > As we are close to the PTG, I have prepared the QA Stein PTG Schedule - > https://ethercalc.openstack.org/Stein-PTG-QA-Schedule > > Details of each session can be found in this etherpad - > https://etherpad.openstack.org/p/qa-stein-ptg > > This time we will have QA Help Hour for 1 day only, which is on Monday, and the next 3 days are for specific topic discussion and a code sprint.
> We still have space for more sessions or topics if any of you would like to add one. If so, please write them on the etherpad with your IRC name. > The session schedule is flexible and we can reschedule on request, but do let me know before 7th Sept. > > If anyone cannot travel to the PTG and would like to attend remotely, do let me know and I can plan something for remote participation. > > -gmann > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From adam at sotk.co.uk Mon Sep 10 16:32:38 2018 From: adam at sotk.co.uk (Adam Coldrick) Date: Mon, 10 Sep 2018 17:32:38 +0100 Subject: [openstack-dev] [OpenStack-Infra] [StoryBoard] Project Update/Some New Things In-Reply-To: <46a0b2a0-61a6-4dba-9411-ced0310522b3@redhat.com> References: <1536586998.2089.20.camel@sotk.co.uk> <20180910143449.ycbttjla2tn4ysql@yuggoth.org> <46a0b2a0-61a6-4dba-9411-ced0310522b3@redhat.com> Message-ID: <1536597158.2089.22.camel@sotk.co.uk> On Mon, 2018-09-10 at 12:14 -0400, Zane Bitter wrote: > On 10/09/18 10:34 AM, Jeremy Stanley wrote: > > On 2018-09-10 14:43:18 +0100 (+0100), Adam Coldrick wrote: > > [...] > > > # Linking to projects by name > > > > > > Keen observers might've noticed that StoryBoard recently grew the > > > ability > > > to link to projects by name, rather than by ID number. All the links > > > to > > > projects in the UI have been replaced with links in this form, and > > > it's > > > probably a good idea for folk to start using them in any > > > documentation > > > they have. These links look like > > > > > >    https://storyboard.openstack.org/#!/project/openstack-infra/story > > > board > > Thanks for this!!! > > > [...] > > > > Worth noting, this has made it harder to find the numeric project ID > > without falling back on the API. 
Change > > https://review.openstack.org/600893 merged to the releases > > repository yesterday allowing deliverable repositories to be > > referenced by their StoryBoard project name rather than only the ID > > number. If there are other places in tooling and automation where we > > relied on the project ID number, work should be done to update those > > similarly. > > In the docs configuration we use the ID for the generating the bugs  > link. We also rely on it being a numeric ID (as a string - it crashes > if  > you use an int) rather than a string to determine whether the target is  > a launchpad project or a storyboard project. If it'll be a big task to change this, I'm happy to make the ID more discoverable from the StoryBoard web UI so that it isn't painful for folk in the meantime. From duc.openstack at gmail.com Mon Sep 10 16:56:10 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 10 Sep 2018 09:56:10 -0700 Subject: [openstack-dev] [senlin][stable] Nominating chenyb4 to Senlin Stable Maintainers Team Message-ID: Hi Senlin Stable Team, I would like to nominate Yuanbin Chen (chenyb4) to the Senlin stable review team. Yuanbin has been doing stable reviews and shown that he understands the policy for merging stable patches [1]. Voting is open for 7 days. Please reply with your +1 vote in favor or -1 as a veto vote. [1] https://review.openstack.org/#/q/branch:%255Estable/.*+reviewedby:cybing4%2540gmail.com Regards, Duc (dtruong) From aschultz at redhat.com Mon Sep 10 16:58:38 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 10 Sep 2018 10:58:38 -0600 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: References: Message-ID: I just realized I booked the room and put it in the etherpad but forgot to email out the time. 
Time: Tuesday 09:00-10:45 Room: Big Thompson https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg Thanks, -Alex On Tue, Sep 4, 2018 at 1:03 PM, Alex Schultz wrote: > On Thu, Aug 9, 2018 at 2:43 PM, Mohammed Naser wrote: >> Hi Alex, >> >> I am very much in favour of what you're bringing up. We do have >> multiple projects that leverage Ansible in different ways and we all >> end up doing the same thing in the end. The duplication of work is >> not really beneficial for us as it takes away from our use-cases. >> >> I believe that there is a certain number of steps that we all share >> regardless of how we deploy; some of the things that come to mind >> right away are: >> >> - Configuring infrastructure services (i.e.: create vhosts for services >> in rabbitmq, create databases for services, configure users for >> rabbitmq, db, etc) >> - Configuring inter-OpenStack services (i.e. keystone_authtoken >> section, creating endpoints, etc. and users for services) >> - Configuring actual OpenStack services (i.e. the >> /etc/<service>/<service>.conf file with the ability to extend >> options) >> - Running CI/integration on a cloud (i.e. a common role that literally >> gets an admin user, password and auth endpoint and creates all >> resources and does CI) >> >> This would deduplicate a lot of work, and especially the last one >> might be beneficial for more than Ansible-based projects; I can >> imagine Puppet OpenStack leveraging this as well inside Zuul CI >> (optionally)... However, I think this is something which we should >> discuss further at the PTG. I think that there will be a tiny bit of >> upfront work as we all standardize, but then it's a win for all involved >> communities. >> >> I would like to propose that the deployment tools maybe sit down together >> at the PTG, all share how we use Ansible to accomplish these tasks, and >> then perhaps we can all work together on abstracting some of these >> concepts for us all to leverage.
>> > > I'm currently trying to get a spot on Tuesday morning to further > discuss some of this items. In the mean time I've started an > etherpad[0] to start collecting ideas for things to discuss. At the > moment I've got the tempest role collaboration and some basic ideas > for best practice items that we can discuss. Feel free to add your > own and I'll update the etherpad with a time slot when I get one > nailed down. > > Thanks, > -Alex > > [0] https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg > >> I'll let others chime in as well. >> >> Regards, >> Mohammed >> >> On Thu, Aug 9, 2018 at 4:31 PM, Alex Schultz wrote: >>> Ahoy folks, >>> >>> I think it's time we come up with some basic rules/patterns on where >>> code lands when it comes to OpenStack related Ansible roles and as we >>> convert/export things. There was a recent proposal to create an >>> ansible-role-tempest[0] that would take what we use in >>> tripleo-quickstart-extras[1] and separate it for re-usability by >>> others. So it was asked if we could work with the openstack-ansible >>> team and leverage the existing openstack-ansible-os_tempest[2]. It >>> turns out we have a few more already existing roles laying around as >>> well[3][4]. >>> >>> What I would like to propose is that we as a community come together >>> to agree on specific patterns so that we can leverage the same roles >>> for some of the core configuration/deployment functionality while >>> still allowing for specific project specific customization. What I've >>> noticed between all the project is that we have a few specific core >>> pieces of functionality that needs to be handled (or skipped as it may >>> be) for each service being deployed. >>> >>> 1) software installation >>> 2) configuration management >>> 3) service management >>> 4) misc service actions >>> >>> Depending on which flavor of the deployment you're using, the content >>> of each of these may be different. 
Just about the only thing that is >>> shared between them all would be the configuration management part. >>> To that end, I was wondering if there would be a benefit to establishing a >>> pattern within, say, openstack-ansible where we can disable items #1 and >>> #3 but reuse #2 in projects like kolla/tripleo where we need to do >>> some configuration generation. If we can't establish a similar >>> pattern it'll make it harder to reuse and contribute between the >>> various projects. >>> >>> In tripleo we've recently created a bunch of ansible-role-tripleo-* >>> repositories which we were planning on moving the tripleo-specific >>> tasks (for upgrades, etc) to, and we were hoping that we might be able to >>> reuse the upstream ansible roles similar to how we've previously >>> leveraged the puppet openstack work for configurations. So for us, it >>> would be beneficial if we could maybe help align/contribute/guide the >>> configuration management and maybe the misc service action portions of the >>> openstack-ansible roles, but be able to disable the actual software >>> install/service management as that would be managed via our >>> ansible-role-tripleo-* roles. >>> >>> Is this something that would be beneficial to further discuss at the >>> PTG? Anyone have any additional suggestions/thoughts? >>> >>> My personal thoughts for tripleo would be that we'd have >>> tripleo-ansible call openstack-ansible-<service> for core config but >>> with package/service installation disabled, and call >>> ansible-role-tripleo-<service> for tripleo-specific actions such as >>> opinionated packages/service configuration/upgrades. Maybe this is >>> too complex? But at the same time, do we need to come up with 3 >>> different ways to do this? 
>>> >>> Thanks, >>> -Alex >>> >>> [0] https://review.openstack.org/#/c/589133/ >>> [1] http://git.openstack.org/cgit/openstack/tripleo-quickstart-extras/tree/roles/validate-tempest >>> [2] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest/ >>> [3] http://git.openstack.org/cgit/openstack/kolla-ansible/tree/ansible/roles/tempest >>> [4] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-tempest >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> Mohammed Naser — vexxhost >> ----------------------------------------------------- >> D. 514-316-8872 >> D. 800-910-1726 ext. 200 >> E. mnaser at vexxhost.com >> W. http://vexxhost.com >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From duc.openstack at gmail.com Mon Sep 10 16:59:25 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 10 Sep 2018 09:59:25 -0700 Subject: [openstack-dev] [senlin] Nominations to Senlin Core Team Message-ID: Hi Senlin Core Team, I would like to nominate 2 new core reviewers for Senlin: [1] Jude Cross (jucross at blizzard.com) [2] Erik Olof Gunnar Andersson (eandersson at blizzard.com) Jude has been doing a number of reviews and contributed some important patches to Senlin during the Rocky cycle that resolved locking problems. Erik has the most number of reviews in Rocky and has contributed high quality code reviews for some time. 
[1] http://stackalytics.com/?module=senlin-group&metric=marks&release=rocky&user_id=jucross at blizzard.com [2] http://stackalytics.com/?module=senlin-group&metric=marks&user_id=eandersson&release=rocky Voting is open for 7 days. Please reply with your +1 vote in favor or -1 as a veto vote. Regards, Duc (dtruong) From james.page at canonical.com Mon Sep 10 17:12:54 2018 From: james.page at canonical.com (James Page) Date: Mon, 10 Sep 2018 18:12:54 +0100 Subject: [openstack-dev] [charms][ptg] Stein PTG team dinner Message-ID: Hi All As outgoing PTL I have the honour of organising the team dinner for the Stein PTG this week. I'm proposing Wednesday night at Russell's Smokehouse: https://www.russellssmokehouse.com/ Let me know if you will be along (and if you have a +1) by end of today and I'll make the reservation! Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Mon Sep 10 17:31:05 2018 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Mon, 10 Sep 2018 11:31:05 -0600 Subject: [openstack-dev] [charms][ptg] Stein PTG team dinner In-Reply-To: References: Message-ID: Sounds great James. Excellent choice of restaurant, I'm in! (My +1 will probably be pre-occupied with other things that evening, so only count me for the reservation) On Mon, Sep 10, 2018 at 11:13 AM James Page wrote: > Hi All > > As outgoing PTL I have the honour of organising the team dinner for the > Stein PTG this week. > > I'm proposing Wednesday night at Russell's Smokehouse: > > https://www.russellssmokehouse.com/ > > Let me know if you will be along (and if you have a +1) by end of today > and I'll make the reservation! 
> > Cheers > > James > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Frode Nordahl Software Engineer Canonical Ltd. -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Sep 10 18:11:53 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 10 Sep 2018 12:11:53 -0600 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: <34ce3a03-84d8-cfa3-cec3-b4254975c4ba@nemebean.com> References: <808519e4-8b10-26a1-e069-f67a1aafd2f1@nemebean.com> <34ce3a03-84d8-cfa3-cec3-b4254975c4ba@nemebean.com> Message-ID: We had an extensive discussion of this in the keystone room today: https://etherpad.openstack.org/p/keystone-stein-unified-limits We had a couple of Nova people in the room, so if anyone from other teams can take a look at the outcomes on the etherpad and let us know if there are any issues with the current plan for other projects that would be good. On 09/10/2018 09:25 AM, Ben Nemec wrote: > We had talked about Tuesday afternoon. I need to sync up with Lance and > figure out exactly when will work best. > > On 09/08/2018 10:58 AM, Jay S Bryant wrote: >> Ben, >> >> Ping me when you are planning on having this discussion if you think >> of it.  Since there is interest in this for Cinder I would like to try >> to be there. >> >> Thanks! >> >> Jay >> >> >> On 9/7/2018 1:43 PM, Ben Nemec wrote: >>> I will also note that I had an oslo.limit topic on the Oslo PTG >>> schedule: https://etherpad.openstack.org/p/oslo-stein-ptg-planning >>> >>> I don't know whether anybody from Jaze's team will be there, but if >>> so that would be a good opportunity for some face-to-face discussion. 
>>> I didn't give it a whole lot of time, but I'm open to extending it if >>> that would be helpful. >>> >>> On 09/07/2018 01:34 PM, Lance Bragstad wrote: >>>> That would be great! I can break down the work a little bit to help >>>> describe where we are at with different parts of the initiative. >>>> Hopefully it will be useful for your colleagues in case they haven't >>>> been closely following the effort. >>>> >>>> # keystone >>>> >>>> Based on the initial note in this thread, I'm sure you're aware of >>>> keystone's status with respect to unified limits. But to recap, the >>>> initial implementation landed in Queens and targeted flat >>>> enforcement [0]. During the Rocky PTG we sat down with other >>>> services and a few operators to explain the current status in >>>> keystone and if either developers or operators had feedback on the >>>> API specifically. Notes were captured in etherpad [1]. We spent the >>>> Rocky cycle fixing usability issues with the API [2] and >>>> implementing support for a hierarchical enforcement model [3]. >>>> >>>> At this point keystone is ready for services to start consuming the >>>> unified limits work. The unified limits API is still marked as >>>> stable and it will likely stay that way until we have at least one >>>> project using unified limits. We can use that as an opportunity to >>>> do a final flush of any changes that need to be made to the API >>>> before fully supporting it. The keystone team expects that to be a >>>> quick transition, as we don't want to keep the API hanging in an >>>> experimental state. It's really just a safe guard to make sure we >>>> have the opportunity to use it in another service before fully >>>> committing to the API. Ultimately, we don't want to prematurely mark >>>> the API as supported when other services aren't even using it yet, >>>> and then realize it has issues that could have been fixed prior to >>>> the adoption phase. 
>>>> >>>> # oslo.limit >>>> >>>> In parallel with the keystone work, we created a new library to aid >>>> services in consuming limits. Currently, the sole purpose of >>>> oslo.limit is to abstract project and project hierarchy information >>>> away from the service, so that services don't have to reimplement >>>> client code to understand project trees, which could arguably become >>>> complex and lead to inconsistencies in UX across services. >>>> >>>> Ideally, a service should be able to pass some relatively basic >>>> information to oslo.limit and expect an answer on whether or not >>>> usage for that claim is valid. For example, here is a project ID, >>>> resource name, and resource quantity; tell me if this project is >>>> over its associated limit or default limit. >>>> >>>> We're currently working on implementing the enforcement bits of >>>> oslo.limit, which requires making API calls to keystone in order to >>>> retrieve the deployed enforcement model, limit information, and >>>> project hierarchies. Then it needs to reason about those things and >>>> calculate usage from the service in order to determine if the >>>> request claim is valid or not. There are patches up for this work, >>>> and reviews are always welcome [4]. >>>> >>>> Note that we haven't released oslo.limit yet, but once the basic >>>> enforcement described above is implemented we will. Then services >>>> can officially pull it into their code as a dependency and we can >>>> work out remaining bugs in both keystone and oslo.limit. Once we're >>>> confident in both the API and the library, we'll bump oslo.limit to >>>> version 1.0 at the same time we graduate the unified limits API from >>>> "experimental" to "supported". Note that oslo libraries <1.0 are >>>> considered experimental, which fits nicely with the unified limit >>>> API being experimental as we shake out usability issues in both >>>> pieces of software. 
>>>> >>>> # services >>>> >>>> Finally, we'll be in a position to start integrating oslo.limit into >>>> services. I imagine this to be a coordinated effort between >>>> keystone, oslo, and service developers. I do have a patch up that >>>> adds a conceptual overview for developers consuming oslo.limit [5], >>>> which renders into [6]. >>>> >>>> To be honest, this is going to be a very large piece of work and >>>> it's going to require a lot of communication. In my opinion, I think >>>> we can use the first couple iterations to generate some well-written >>>> usage documentation. Any questions coming from developers in this >>>> phase should probably be answered in documentation if we want to >>>> enable folks to pick this up and run with it. Otherwise, I could see >>>> the handful of people pushing the effort becoming a bottle neck in >>>> adoption. >>>> >>>> Hopefully this helps paint the landscape of where things are >>>> currently with respect to each piece. As always, let me know if you >>>> have any additional questions. If people want to discuss online, you >>>> can find me, and other contributors familiar with this topic, in >>>> #openstack-keystone or #openstack-dev on IRC (nic: lbragstad). 
>>>> >>>> [0] >>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html >>>> >>>> [1] https://etherpad.openstack.org/p/unified-limits-rocky-ptg >>>> [2] https://tinyurl.com/y6ucarwm >>>> [3] >>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/strict-two-level-enforcement-model.html >>>> >>>> [4] >>>> https://review.openstack.org/#/q/project:openstack/oslo.limit+status:open >>>> >>>> [5] https://review.openstack.org/#/c/600265/ >>>> [6] >>>> http://logs.openstack.org/65/600265/3/check/openstack-tox-docs/a6bcf38/html/user/usage.html >>>> >>>> >>>> On Thu, Sep 6, 2018 at 8:56 PM Jaze Lee >>> > wrote: >>>> >>>>     Lance Bragstad >>> > 于 >>>>     2018年9月6日周四 下午10:01写道: >>>>      > >>>>      > I wish there was a better answer for this question, but >>>> currently >>>>     there are only a handful of us working on the initiative. If >>>> you, or >>>>     someone you know, is interested in getting involved, I'll happily >>>>     help onboard people. >>>> >>>>     Well,I can recommend some my colleges to work on this. I wish >>>> in S, >>>>     all service can use unified limits to do quota job. >>>> >>>>      > >>>>      > On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee >>>     > wrote: >>>>      >> >>>>      >> On Stein only one service? >>>>      >> Is there some methods to move this more fast? >>>>      >> Lance Bragstad >>>     > 于2018年9月5日周三 下午9:29写道: >>>>      >> > >>>>      >> > Not yet. Keystone worked through a bunch of usability >>>>     improvements with the unified limits API last release and created >>>>     the oslo.limit library. We have a patch or two left to land in >>>>     oslo.limit before projects can really start using unified limits >>>> [0]. >>>>      >> > >>>>      >> > We're hoping to get this working with at least one >>>> resource in >>>>     another service (nova, cinder, etc...) in Stein. 
>>>> >> > [0] https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init
>>>> >> >
>>>> >> > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee wrote:
>>>> >> >>
>>>> >> >> Hello,
>>>> >> >>     Does nova and cinder use keystone's unified limits API
>>>> >> >>     to do the quota job?
>>>> >> >>     If not, is there a plan to do this?
>>>> >> >>     Thanks a lot.
>>>> >> >>
>>>> >> >> --
>>>> >> >> 谦谦君子
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From doug at doughellmann.com Mon Sep 10 19:48:59 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 10 Sep 2018 13:48:59 -0600
Subject: [openstack-dev] [goals][python3][nova] starting zuul migration for nova repos
Message-ID: <1536608885-sup-3596@lrrr.local>
Melanie gave me the go-ahead to propose the patches, so here's the list
of patches for the zuul migration, doc job update, and python 3.6 unit
tests for the nova repositories.

+----------------------------------------------+--------------------------------+---------------+
| Subject                                      | Repo                           | Branch        |
+----------------------------------------------+--------------------------------+---------------+
| remove job settings for nova repositories    | openstack-infra/project-config | master        |
| import zuul job settings from project-config | openstack/nova                 | master        |
| switch documentation job to new PTI          | openstack/nova                 | master        |
| add python 3.6 unit test job                 | openstack/nova                 | master        |
| import zuul job settings from project-config | openstack/nova                 | stable/ocata  |
| import zuul job settings from project-config | openstack/nova                 | stable/pike   |
| import zuul job settings from project-config | openstack/nova                 | stable/queens |
| import zuul job settings from project-config | openstack/nova                 | stable/rocky  |
| import zuul job settings from project-config | openstack/nova-specs           | master        |
| import zuul job settings from project-config | openstack/os-traits            | master        |
| switch documentation job to new PTI          | openstack/os-traits            | master        |
| add python 3.6 unit test job                 | openstack/os-traits            | master        |
| import zuul job settings from project-config | openstack/os-traits            | stable/pike   |
| import zuul job settings from project-config | openstack/os-traits            | stable/queens |
| import zuul job settings from project-config | openstack/os-traits            | stable/rocky  |
| import zuul job settings from project-config | openstack/os-vif               | master        |
| switch documentation job to new PTI          | openstack/os-vif               | master        |
| add python 3.6 unit test job                 | openstack/os-vif               | master        |
| import zuul job settings from project-config | openstack/os-vif               | stable/ocata  |
| import zuul job settings from project-config | openstack/os-vif               | stable/pike   |
| import zuul job settings from project-config | openstack/os-vif               | stable/queens |
| import zuul job settings from project-config | openstack/os-vif               | stable/rocky  |
| import zuul job settings from project-config | openstack/osc-placement        | master        |
| switch documentation job to new PTI          | openstack/osc-placement        | master        |
| add python 3.6 unit test job                 | openstack/osc-placement        | master        |
| import zuul job settings from project-config | openstack/osc-placement        | stable/queens |
| import zuul job settings from project-config | openstack/osc-placement        | stable/rocky  |
| import zuul job settings from project-config | openstack/python-novaclient    | master        |
| switch documentation job to new PTI          | openstack/python-novaclient    | master        |
| add python 3.6 unit test job                 | openstack/python-novaclient    | master        |
| add lib-forward-testing-python3 test job     | openstack/python-novaclient    | master        |
| import zuul job settings from project-config | openstack/python-novaclient    | stable/ocata  |
| import zuul job settings from project-config | openstack/python-novaclient    | stable/pike   |
| import zuul job settings from project-config | openstack/python-novaclient    | stable/queens |
| import zuul job settings from project-config | openstack/python-novaclient    | stable/rocky  |
+----------------------------------------------+--------------------------------+---------------+

From jungleboyj at gmail.com Mon Sep 10 20:22:22 2018
From: jungleboyj at gmail.com (Jay S Bryant)
Date: Mon, 10 Sep 2018 15:22:22 -0500
Subject: [openstack-dev] [docs][cinder] about cinder volume qos
In-Reply-To: 
References: 
Message-ID: 

On 9/10/2018 7:17 AM, Rambo wrote:
> Hi, all
>
>     At first, I found it stated in docs.openstack.org [1] that we can
> define hard performance limits for each volume. But in fact we can
> only define hard performance limits for each volume type.
> Another, the note"As of the Nova 18.0.0 Rocky release, front end QoS > settings are only supported when using the libvirt driver.",in fact, > we have supported the front end QoS settings when using the libvirt > driver previous. Is the document wrong?Can you tell me more about this > ?Thank you very much. > > [1]https://docs.openstack.org/cinder/latest/admin/blockstorage-basic-volume-qos.html > > > Rambo, The performance limits are limited to a volume type as you need to have a volume type to be able to associate a QoS type with it.  So, that makes sense. As for the documentation, it is a little confusing the way that is worded but it isn't wrong.  So, QoS support thus far, including Nova 18.0.0, front end QoS setting only works with the libvirt driver.  I don't interpret that as meaning that there wasn't QoS support before that. Jay > > > > > > Best Regards > Rambo > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Sep 10 20:31:30 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 10 Sep 2018 15:31:30 -0500 Subject: [openstack-dev] [OpenStack-Infra] [StoryBoard] Project Update/Some New Things In-Reply-To: <46a0b2a0-61a6-4dba-9411-ced0310522b3@redhat.com> References: <1536586998.2089.20.camel@sotk.co.uk> <20180910143449.ycbttjla2tn4ysql@yuggoth.org> <46a0b2a0-61a6-4dba-9411-ced0310522b3@redhat.com> Message-ID: <44bc7f15-cf97-d419-8918-b12a4fa83e3e@gmail.com> On 9/10/2018 11:14 AM, Zane Bitter wrote: > On 10/09/18 10:34 AM, Jeremy Stanley wrote: >> On 2018-09-10 14:43:18 +0100 (+0100), Adam Coldrick wrote: >> [...] 
>>> # Linking to projects by name
>>>
>>> Keen observers might've noticed that StoryBoard recently grew the
>>> ability to link to projects by name, rather than by ID number. All
>>> the links to projects in the UI have been replaced with links in
>>> this form, and it's probably a good idea for folk to start using
>>> them in any documentation they have. These links look like
>>>
>>> https://storyboard.openstack.org/#!/project/openstack-infra/storyboard
>
> Thanks for this!!!

+2  Thank you for addressing this!

>
>> [...]
>>
>> Worth noting, this has made it harder to find the numeric project ID
>> without falling back on the API. Change
>> https://review.openstack.org/600893 merged to the releases repository
>> yesterday, allowing deliverable repositories to be referenced by
>> their StoryBoard project name rather than only the ID number. If
>> there are other places in tooling and automation where we relied on
>> the project ID number, work should be done to update those similarly.
>
> In the docs configuration we use the ID for generating the bugs link.
> We also rely on it being a numeric ID (as a string - it crashes if you
> use an int) rather than a name to determine whether the target is a
> Launchpad project or a StoryBoard project.
>
>>> # Finding stories from a task ID
>>>
>>> It is now possible to navigate to a story given just a task ID, if
>>> for whatever reason that's all the information you have available.
>>> A link like
>>>
>>>    https://storyboard.openstack.org/#!/task/12389
>>>
>>> will work. This will redirect to the story containing the task, and
>>> is the first part of work to support linking directly to an
>>> individual task in a story.
>> [...]
>>
>> As an aside, I think this makes it possible now for us to start
>> hyperlinking Task footers in commit messages within the Gerrit
>> change view.
I'll try and figure out what we need to adjust in our >> Gerrit commentlink and its-storyboard plugin configs to make that >> happen. > > +1 > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From samuel at cassi.ba Mon Sep 10 21:26:51 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Mon, 10 Sep 2018 14:26:51 -0700 Subject: [openstack-dev] [election][tc] Opinion about 'PTL' tooling In-Reply-To: <20180910130748.zukcbjcdz2dhicow@yuggoth.org> References: <7846-5b965200-7-68cc9800@21942213> <1536579024-sup-8338@lrrr.local> <0D98A746-F005-4D72-8F40-B900378D8D62@vexxhost.com> <20180910130748.zukcbjcdz2dhicow@yuggoth.org> Message-ID: On Mon, Sep 10, 2018 at 6:07 AM, Jeremy Stanley wrote: > On 2018-09-10 06:38:11 -0600 (-0600), Mohammed Naser wrote: >> I think something we should take into consideration is *what* you >> consider health because the way we’ve gone about it over health >> checks is not something that can become a toolkit because it was >> more of question asking, etc > [...] > > I was going to follow up with something similar. It's not as if the > TC has a toolkit of any sort at this point to come up with the > information we're assembling in the health tracker either. It's > built up from interviewing PTLs, reading meeting logs, looking at > the changes which merge to teams' various deliverable repositories, > asking around as to whether they've missed important deadlines such > as release milestones (depending on what release models they > follow) or PTL nominations, looking over cycle goals to see how far > along they are, and so on. Extremely time-consuming which is why > it's taken us most of a release cycle and we still haven't finished > a first pass. 
> > Assembling some of this information might be automatable if we make > adjustments to how the data/processes on which it's based are > maintained, but at this point we're not even sure which ones are > problem indicators at all and are just trying to provide the > clearest picture we can. If we come up with a detailed checklist and > some of the checks on that list can be automated in some way, that > seems like a good thing. However, the original data should be > publicly accessible so I don't see why it needs to be members of the > technical committee who write the software to collect that. > -- > Jeremy Stanley > Things like tracking project health I see like organizing a trash pickup at the local park, or off the side of a road: dirty, unglamorous work. The results can be immediately visible to not only those doing the work, but passers-by. Eliminating the human factor in deeply human-driven interactions can have ramifications immediately noticed. As distributed as things exist today, reducing the conversation to a few methods or people can damage intent, without humans talking to humans in a more direct manner. Best, Samuel Cassiba (scas) From iwienand at redhat.com Mon Sep 10 21:39:39 2018 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 11 Sep 2018 07:39:39 +1000 Subject: [openstack-dev] [Release-job-failures] Tag of openstack/python-neutronclient failed In-Reply-To: <20180910054949.GF16495@thor.bakeyournoodle.com> References: <20180910054949.GF16495@thor.bakeyournoodle.com> Message-ID: <2441be2b-4c7b-91a5-254b-f507d150ce1b@redhat.com> > On Mon, Sep 10, 2018 at 05:13:35AM +0000, zuul at openstack.org wrote: >> Build failed. >> >> - publish-openstack-releasenotes http://logs.openstack.org/c8/c89ca61fdcaf603a10750b289228b7f9a3597290/tag/publish-openstack-releasenotes/fbbd0fa/ : FAILURE in 4m 03s The line that is causing this is - Add OSC plugin support for the “Networking Service Function Chaining” ... 
see if you can find the unicode :)

I did replicate it by mostly doing what the gate does; make a python2
virtualenv and install everything, then run

  ./env/bin/sphinx-build -a -E -W -d releasenotes/build/doctrees/ \
      -b html releasenotes/source/ releasenotes/build/html/

In the gate, it doesn't use "tox -e releasenotes" ... which passes
because it's python3 and everything is unicode already.

I think this is a reno problem, and I've proposed

  https://review.openstack.org/601432 Use unicode for debug string

Thanks

-i

From jungleboyj at gmail.com Mon Sep 10 22:40:58 2018
From: jungleboyj at gmail.com (Jay S Bryant)
Date: Mon, 10 Sep 2018 17:40:58 -0500
Subject: [openstack-dev] [ptg][cinder][manila] Team Dinner Plan ...
Message-ID: <87d0b597-337e-cbd3-250f-5dc357093e60@gmail.com>

All,

We have landed on a decision for the Cinder/Manila dinner plan. Here
are the details:

Location: Casey's Bistro and Pub

 * 7:30 pm Tuesday, after the Happy Hour
 * 7301 E 29th Ave, Denver, CO 80238
 * Reservation for Amit

See you all there!

Jay

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lbragstad at gmail.com Mon Sep 10 22:53:53 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Mon, 10 Sep 2018 16:53:53 -0600
Subject: [openstack-dev] [election][tc] Opinion about 'PTL' tooling
In-Reply-To: <20180910130748.zukcbjcdz2dhicow@yuggoth.org>
References: <7846-5b965200-7-68cc9800@21942213> <1536579024-sup-8338@lrrr.local> <0D98A746-F005-4D72-8F40-B900378D8D62@vexxhost.com> <20180910130748.zukcbjcdz2dhicow@yuggoth.org>
Message-ID: 

I agree that it depends on what metrics you think accurately showcase
project health. Is it the number of contributions? The number of unique
contributors? Diversity across participating organizations? Completion
ratios of blueprints or committed fixes over bugs opened? I imagine
different projects will have different opinions on this, but it would
be interesting to know what those opinions are.
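Several of the candidate metrics above become mechanical once contribution records exist in some form. A quick sketch, where the record format, names, and sample data are all invented for illustration:

```python
# Illustrative only: compute a few candidate health metrics from a list
# of (author, organization) contribution records.
from collections import Counter


def health_metrics(records):
    """records: one (author, organization) tuple per contribution
    (review, patch, bug triage, ...) in some time window."""
    authors = Counter(a for a, _ in records)
    orgs = Counter(o for _, o in records)
    top_org_share = max(orgs.values()) / len(records) if records else 0.0
    return {
        "contributions": len(records),
        "unique_contributors": len(authors),
        # Share of activity from the single busiest org; closer to 1.0
        # means less organizational diversity.
        "top_org_share": round(top_org_share, 2),
    }


sample = [
    ("alice", "org-a"), ("bob", "org-a"), ("carol", "org-b"),
    ("alice", "org-a"), ("dave", "org-c"),
]
print(health_metrics(sample))
# {'contributions': 5, 'unique_contributors': 4, 'top_org_share': 0.6}
```

Whether these numbers actually indicate health is, of course, the open question in this thread; the point is only that gathering them need not be the time-consuming part.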
I think if you can reasonably justify a metric as an accurate representation of health, then it makes sense to try and automate it. This jogged my memory and it might not be a valid metric of health, but I liked the idea after I heard another project doing it (I think it was swift). If you could recognize contributions (loosely defined here to be reviews, patches, bug triage) for an individual and if you noticed those contributions dropping off after a period of time, then you (as a maintainer or PTL of a project) could reach out to the individual directly. This assumes the reason isn't obvious and feels like it is more meant to track lost contributors. On Mon, Sep 10, 2018 at 3:27 PM Samuel Cassiba wrote: > On Mon, Sep 10, 2018 at 6:07 AM, Jeremy Stanley wrote: > > On 2018-09-10 06:38:11 -0600 (-0600), Mohammed Naser wrote: > >> I think something we should take into consideration is *what* you > >> consider health because the way we’ve gone about it over health > >> checks is not something that can become a toolkit because it was > >> more of question asking, etc > > [...] > > > > I was going to follow up with something similar. It's not as if the > > TC has a toolkit of any sort at this point to come up with the > > information we're assembling in the health tracker either. It's > > built up from interviewing PTLs, reading meeting logs, looking at > > the changes which merge to teams' various deliverable repositories, > > asking around as to whether they've missed important deadlines such > > as release milestones (depending on what release models they > > follow) or PTL nominations, looking over cycle goals to see how far > > along they are, and so on. Extremely time-consuming which is why > > it's taken us most of a release cycle and we still haven't finished > > a first pass. 
> > > > Assembling some of this information might be automatable if we make > > adjustments to how the data/processes on which it's based are > > maintained, but at this point we're not even sure which ones are > > problem indicators at all and are just trying to provide the > > clearest picture we can. If we come up with a detailed checklist and > > some of the checks on that list can be automated in some way, that > > seems like a good thing. However, the original data should be > > publicly accessible so I don't see why it needs to be members of the > > technical committee who write the software to collect that. > > -- > > Jeremy Stanley > > > > Things like tracking project health I see like organizing a trash > pickup at the local park, or off the side of a road: dirty, > unglamorous work. The results can be immediately visible to not only > those doing the work, but passers-by. Eliminating the human factor in > deeply human-driven interactions can have ramifications immediately > noticed. > > As distributed as things exist today, reducing the conversation to a > few methods or people can damage intent, without humans talking to > humans in a more direct manner. > > Best, > Samuel Cassiba (scas) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Mon Sep 10 23:10:59 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 10 Sep 2018 17:10:59 -0600 Subject: [openstack-dev] [upgrade] request for pre-upgrade check for db purge Message-ID: I created a nova bug [1] to track a request that came up in the upgrades SIG room at the PTG today [2] and would like to see if there is any feedback from other operators/developers that weren't part of the discussion. The basic problem is that failing to archive/purge deleted records* from the database can make upgrades much slower during schema migrations. Anecdotes from the room mentioned that it can be literally impossible to complete upgrades for keystone and heat in certain scenarios if you don't purge the database first. The request was that a configurable limit gets added to each service which is checked as part of the service's pre-upgrade check routine [3] and warn if the number of records to purge is over that limit. For example, the nova-status upgrade check could warn if there are over 100000 deleted records total across all cells databases. Maybe cinder would have something similar for deleted volumes. Keystone could have something for revoked tokens. Another idea in the room was flagging on records over a certain age limit. For example, if there are deleted instances in nova that were deleted >1 year ago. How do people feel about this? It seems pretty straight-forward to me. If people are generally in favor of this, then the question is what would be sane defaults - or should we not assume a default and force operators to opt into this? * nova delete doesn't actually delete the record from the instances table, it flips a value to hide it - you have to archive/purge those records to get them out of the main table. 
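The check being requested here is simple to prototype: count soft-deleted rows and warn past a configurable threshold. A minimal sketch against a toy sqlite schema follows; the table layout, the `deleted != 0` soft-delete convention (as nova uses), and the thresholds are stand-ins, and a real version would live behind each project's status command (e.g. `nova-status upgrade check`):

```python
# Illustrative pre-upgrade check: warn when soft-deleted rows exceed a
# configurable limit. Schema and thresholds are invented for the example.
import sqlite3


def check_deleted_rows(conn, table, max_rows=100000):
    """Return (success, message) in the spirit of the warning-level
    checks described in the upgrade-checkers goal."""
    # Interpolating the identifier is fine here: `table` is a fixed
    # internal name, not user input.
    count = conn.execute(
        "SELECT COUNT(*) FROM %s WHERE deleted != 0" % table).fetchone()[0]
    if count > max_rows:
        return (False,
                "%d soft-deleted rows in %s exceed the configured limit "
                "of %d; archive/purge before upgrading"
                % (count, table, max_rows))
    return (True, "%d soft-deleted rows in %s" % (count, table))


# Toy data: 10 instances, half of them soft-deleted.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER, deleted INTEGER)")
conn.executemany("INSERT INTO instances VALUES (?, ?)",
                 [(i, 1 if i % 2 else 0) for i in range(10)])
print(check_deleted_rows(conn, "instances", max_rows=3))
```

The age-based variant discussed in the room would only change the WHERE clause (e.g. filtering on a `deleted_at` column), which is part of what makes this feel straightforward to add per service.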
[1] https://bugs.launchpad.net/nova/+bug/1791824 [2] https://etherpad.openstack.org/p/upgrade-sig-ptg-stein [3] https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html -- Thanks, Matt From emccormick at cirrusseven.com Tue Sep 11 00:15:36 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Mon, 10 Sep 2018 18:15:36 -0600 Subject: [openstack-dev] Fwd: [Openstack-operators] revamped ops meetup day 2 In-Reply-To: References: Message-ID: ---------- Forwarded message --------- From: Chris Morgan Date: Mon, Sep 10, 2018, 5:55 PM Subject: [Openstack-operators] revamped ops meetup day 2 To: OpenStack Operators , < openstaack-dev at lists.openstack.org> Hi All, We (ops meetups team) got several additional suggestions for ops meetups session, so we've attempted to revamp day 2 to fit them in, please see https://docs.google.com/spreadsheets/d/1EUSYMs3GfglnD8yfFaAXWhLe0F5y9hCUKqCYe0Vp1oA/edit#gid=981527336 Given the timing, we'll attempt to confirm the rest of the day starting at 9am over coffee. If you're moderating something tomorrow please check out the adjusted times. If something doesn't work for you we'll try and swap sessions to make it work. Cheers Chris, Erik, Sean -- Chris Morgan _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Tue Sep 11 01:42:21 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 11 Sep 2018 10:42:21 +0900 Subject: [openstack-dev] [Searchlight] Virtual PTG topic updating Message-ID: Hi Searchlight team, Because we don't have the team meeting this week so I'm planning to organize the virtual PTG on 20th Sep, 12:00 UTC. Please join me on the IRC channel (#openstack-searchlight) and find out an appropriate schedule. 
The purposes of this meeting are: - Talk to the team face-to-face virtually (using Zoom) :) - Discuss about how we will release in Stein-1 - Update project's progress. Please put your discussion topics on the Etherpad link [1] so we can fix the schedule accordingly. [1] https://etherpad.openstack.org/p/searchlight-stein-ptg Thanks, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Tue Sep 11 02:43:39 2018 From: james.slagle at gmail.com (James Slagle) Date: Mon, 10 Sep 2018 20:43:39 -0600 Subject: [openstack-dev] [TripleO] Plan management refactoring for Life cycle In-Reply-To: References: <03885d00-c63c-91ae-af99-c8d89ae4d7c4@redhat.com> Message-ID: On Mon, Sep 10, 2018 at 10:12 AM Jiri Tomasek wrote: > > Hi Mathieu, > > Thanks for bringing up the topic. There are several efforts currently in progress which should lead to solving the problems you're describing. We are working on introducing CLI commands which would perform the deployment configuration operations on deployment plan in Swift. This is a main step to finally reach CLI and GUI compatibility/interoperability. CLI will perform actions to configure deployment (roles, networks, environments selection, parameters setting etc.) by calling Mistral workflows which store the information in deployment plan in Swift. The result is that all the information which define the deployment are stored in central place - deployment plan in Swift and the deploy command is turned into simple 'openstack overcloud deploy'. Deployment plan then has plan-environment.yaml which has the list of environments used and customized parameter values, roles-data.yaml which carry roles definition and network-data.yaml which carry networks definition. The information stored in these files (and deployment plan in general) can then be treated as source of information about deployment. 
The deployment can then be easily exported and reliably replicated. > > Here is the document which we put together to identify missing pieces between GUI,CLI and Mistral TripleO API. We'll use this to discuss the topic at PTG this week and define work needed to be done to achieve the complete interoperability. [1] > > Also there is a pending patch from Steven Hardy which aims to remove CLI specific environments merging which should fix the problem with tracking of the environments used with CLI deployment. [2] > > [1] https://gist.github.com/jtomasek/8c2ae6118be0823784cdafebd9c0edac (Apologies for inconvenient format, I'll try to update this to better/editable format. Original doc: https://docs.google.com/spreadsheets/d/1ERfx2rnPq6VjkJ62JlA_E6jFuHt9vVl3j95dg6-mZBM/edit?usp=sharing) > [2] https://review.openstack.org/#/c/448209/ Related to this work, I'd like to see us store the plan in git instead of swift. I think this would reduce some of the complexity around plan management, and move us closer to a simpler undercloud architecture. It would be nice to see each change to the plan represented as new git commit, so we can even see the changes to the plan as roles, networks, services, etc, are selected. I also think git would provide a familiar experience for both developers and operators who are already accustomed to devops type workflows. I think we could make these changes without it impact the API too much or, hopefully, at all. -- -- James Slagle -- From johnsomor at gmail.com Tue Sep 11 03:48:53 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 10 Sep 2018 21:48:53 -0600 Subject: [openstack-dev] [release][octavia] Message-ID: Octavia and Release teams, I am adding Carlos Goncalves to the Octavia project release management liaison list for Stein. He will be assisting with regular stable branch release patches. 
Let me know if you have any questions or concerns, Michael From doug at doughellmann.com Tue Sep 11 03:55:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 10 Sep 2018 21:55:16 -0600 Subject: [openstack-dev] [release][octavia] In-Reply-To: References: Message-ID: <1536638099-sup-543@lrrr.local> Excerpts from Michael Johnson's message of 2018-09-10 21:48:53 -0600: > Octavia and Release teams, > > I am adding Carlos Goncalves to the Octavia project release management > liaison list for Stein. > He will be assisting with regular stable branch release patches. > > Let me know if you have any questions or concerns, > Michael > I see you've updated the wiki, too. Thanks for the heads-up! Doug From tony at bakeyournoodle.com Tue Sep 11 04:21:23 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 11 Sep 2018 14:21:23 +1000 Subject: [openstack-dev] [Release-job-failures] Tag of openstack/python-neutronclient failed In-Reply-To: <2441be2b-4c7b-91a5-254b-f507d150ce1b@redhat.com> References: <20180910054949.GF16495@thor.bakeyournoodle.com> <2441be2b-4c7b-91a5-254b-f507d150ce1b@redhat.com> Message-ID: <20180911042122.GH16495@thor.bakeyournoodle.com> On Tue, Sep 11, 2018 at 07:39:39AM +1000, Ian Wienand wrote: > > On Mon, Sep 10, 2018 at 05:13:35AM +0000, zuul at openstack.org wrote: > >> Build failed. > >> > >> - publish-openstack-releasenotes http://logs.openstack.org/c8/c89ca61fdcaf603a10750b289228b7f9a3597290/tag/publish-openstack-releasenotes/fbbd0fa/ : FAILURE in 4m 03s > > The line that is causing this is > > - Add OSC plugin support for the “Networking Service Function Chaining” ... > > see if you can find the unicode :) > > I did replicate it by mostly doing what the gate does; make a python2 > vitualenv and install everything, then run > > ./env/bin/sphinx-build -a -E -W -d releasenotes/build/doctrees/ \ > -b html releasenotes/source/ releasenotes/build/html/ > > In the gate, it doesn't use "tox -e releasenotes" ... 
which passes because it's python3 and everything is unicode already.

>
> I think this is a reno problem, and I've proposed
>
> https://review.openstack.org/601432 Use unicode for debug string

Thanks Ian!

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From tony at bakeyournoodle.com Tue Sep 11 04:29:15 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 11 Sep 2018 14:29:15 +1000
Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked
In-Reply-To: <20180905150309.cxstnk6i2sms6pj4@gentoo.org>
References: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org> <20180905150309.cxstnk6i2sms6pj4@gentoo.org>
Message-ID: <20180911042914.GI16495@thor.bakeyournoodle.com>

On Wed, Sep 05, 2018 at 10:03:09AM -0500, Matthew Thode wrote:
> The requirements team has gone ahead and made an awful hack to get gate
> unwedged. The commit message is a very good summary of our reasoning
> why it has to be this way for now. My comment explains our plan going
> forward (there will be a revert prepared as soon as this merges, for
> instance).
>
> step 1. merge this

This == https://review.openstack.org/#/c/599277/ ; done, and similar
versions on stable branches.

> step 2. look into and possibly fix our tooling (why was the gitref
> addition not rejected by gate)

Not done yet.

> step 3. fix networking-odl (release ceilometer)

Done. See:
 * https://review.openstack.org/#/c/601487/ ; and
 * https://review.openstack.org/#/c/601488/

> step 4. unmerge this

Done, and marked as depending on the reviews above.
https://review.openstack.org/#/c/600123/

So I think we have the required reviews lined up to fix master, but
they need votes from zuul and core teams. We can handle stable later ;P

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Tue Sep 11 04:34:08 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 11 Sep 2018 14:34:08 +1000 Subject: [openstack-dev] [Release-job-failures] Tag of openstack/python-neutronclient failed In-Reply-To: <1536580611-sup-6050@lrrr.local> References: <20180910054949.GF16495@thor.bakeyournoodle.com> <1536580611-sup-6050@lrrr.local> Message-ID: <20180911043407.GJ16495@thor.bakeyournoodle.com> On Mon, Sep 10, 2018 at 05:58:21AM -0600, Doug Hellmann wrote: > The python3 version of the job worked. I think both jobs ran because the > repo is in the middle of its zuul settings transition and the cleanup > patch hasn't merged yet. Since one of them worked, I think the published > output should be OK. Ahh okay. Thanks Doug Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From melwittt at gmail.com Tue Sep 11 05:42:06 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 10 Sep 2018 23:42:06 -0600 Subject: [openstack-dev] [nova][placement] openstack/placement governance switch plan Message-ID: <9109800c-423a-35b5-89bf-ffd6dea39eec@gmail.com> Howdy everyone, Those of us involved in the placement extraction process sat down together today to discuss the plan for openstack/placement governance. We agreed on a set of criteria which we will use to determine when we will switch the openstack/placement governance from the compute project to its own project. I'd like to update everyone with a summary of the plan we agreed upon. 
Attendees: Balázs Gibizer, Chris Dent, Dan Smith, Ed Leafe, Eric Fried, Jay Pipes, Matt Riedemann, Melanie Witt, Mohammed Naser, Sylvain Bauza

The targets we have set are:

- Devstack/grenade job that executes an upgrade which deploys the extracted placement code
- Support in one of the deployment tools to deploy extracted placement code (Tripleo)
- An upgrade job using any deployment tool (this might have to be a manual test by a deployment tool team member if none of the deployment tools have an upgrade job)
- Implementation of nested vGPU resource support in the xenapi and libvirt drivers
- Functional test with vGPU resources that verifies reshaping of flat vGPU resources to nested vGPU resources and successful scheduling to the same compute host after reshaping
- Lab test with real hardware of the same ^ (xenapi and libvirt)

Once we have achieved these targets, we will switch openstack/placement governance from the compute project to its own project. The placement-core team will flatten nova-core into individual members of placement-core so it may evolve, the PTL of openstack/placement will be the same as the openstack/nova PTL for the remainder of the release cycle, and the electorate for the openstack/placement PTL election for the next release cycle will be determined by the commit history of the extracted placement code repo, probably by date, to include contributors from the previous two release cycles, as per usual. Thank you to Mohammed for facilitating the discussion, we really appreciate it. Cheers, -melanie From jazeltq at gmail.com Tue Sep 11 07:14:14 2018 From: jazeltq at gmail.com (Jaze Lee) Date: Tue, 11 Sep 2018 15:14:14 +0800 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: References: Message-ID: I recommend lijie at unitedstack.com join in to help move this work forward. Maybe first we should make sure the keystone unified limits API is really OK, or is there something else to do first? Lance Bragstad wrote on Sat, Sep 8, 2018 at 2:35 AM: > > That would be great!
I can break down the work a little bit to help describe where we are at with different parts of the initiative. Hopefully it will be useful for your colleagues in case they haven't been closely following the effort. > > # keystone > > Based on the initial note in this thread, I'm sure you're aware of keystone's status with respect to unified limits. But to recap, the initial implementation landed in Queens and targeted flat enforcement [0]. During the Rocky PTG we sat down with other services and a few operators to explain the current status in keystone and ask whether developers or operators had feedback on the API specifically. Notes were captured in etherpad [1]. We spent the Rocky cycle fixing usability issues with the API [2] and implementing support for a hierarchical enforcement model [3]. > > At this point keystone is ready for services to start consuming the unified limits work. The unified limits API is still marked as experimental and it will likely stay that way until we have at least one project using unified limits. We can use that as an opportunity to do a final flush of any changes that need to be made to the API before fully supporting it. The keystone team expects that to be a quick transition, as we don't want to keep the API hanging in an experimental state. It's really just a safeguard to make sure we have the opportunity to use it in another service before fully committing to the API. Ultimately, we don't want to prematurely mark the API as supported when other services aren't even using it yet, and then realize it has issues that could have been fixed prior to the adoption phase. > > # oslo.limit > > In parallel with the keystone work, we created a new library to aid services in consuming limits.
Currently, the sole purpose of oslo.limit is to abstract project and project hierarchy information away from the service, so that services don't have to reimplement client code to understand project trees, which could arguably become complex and lead to inconsistencies in UX across services. > > Ideally, a service should be able to pass some relatively basic information to oslo.limit and expect an answer on whether or not usage for that claim is valid. For example, here is a project ID, resource name, and resource quantity, tell me if this project is over its associated limit or default limit. > > We're currently working on implementing the enforcement bits of oslo.limit, which requires making API calls to keystone in order to retrieve the deployed enforcement model, limit information, and project hierarchies. Then it needs to reason about those things and calculate usage from the service in order to determine if the request claim is valid or not. There are patches up for this work, and reviews are always welcome [4]. > > Note that we haven't released oslo.limit yet, but once the basic enforcement described above is implemented we will. Then services can officially pull it into their code as a dependency and we can work out remaining bugs in both keystone and oslo.limit. Once we're confident in both the API and the library, we'll bump oslo.limit to version 1.0 at the same time we graduate the unified limits API from "experimental" to "supported". Note that oslo libraries <1.0 are considered experimental, which fits nicely with the unified limit API being experimental as we shake out usability issues in both pieces of software. > > # services > > Finally, we'll be in a position to start integrating oslo.limit into services. I imagine this to be a coordinated effort between keystone, oslo, and service developers. I do have a patch up that adds a conceptual overview for developers consuming oslo.limit [5], which renders into [6].
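[Editor's sketch] The claim check described above — hand the library a project ID, resource name, and requested quantity, plus a callback that counts the service's own usage, and get back a verdict — can be outlined in plain Python. This is an illustrative sketch only: oslo.limit was unreleased at the time of writing, so every name below is invented for the example and is not the library's actual interface.

```python
# Hypothetical sketch of the enforcement flow described above; none of
# these names come from the real oslo.limit API.

# Registered limits act as deployment-wide defaults; project limits
# override them, mirroring the keystone unified limits model.
registered_limits = {"instances": 10}
project_limits = {("proj-a", "instances"): 5}

# The service supplies the usage callback, because only the service
# knows how much of a resource each project currently consumes.
def count_instances(project_id):
    current_usage = {"proj-a": 3, "proj-b": 7}
    return current_usage.get(project_id, 0)

def enforce(project_id, resource, requested, count_usage):
    """Return True if the claim fits under the project or default limit."""
    limit = project_limits.get((project_id, resource),
                               registered_limits.get(resource))
    if limit is None:
        raise ValueError("no limit registered for %s" % resource)
    return count_usage(project_id) + requested <= limit

print(enforce("proj-a", "instances", 1, count_instances))  # True  (3 + 1 <= 5)
print(enforce("proj-a", "instances", 3, count_instances))  # False (3 + 3 > 5)
print(enforce("proj-b", "instances", 2, count_instances))  # True  (7 + 2 <= 10)
```

A real hierarchical model would additionally have to walk the project tree retrieved from keystone, which is exactly the complexity the library is meant to hide from services.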
> > To be honest, this is going to be a very large piece of work and it's going to require a lot of communication. In my opinion, I think we can use the first couple iterations to generate some well-written usage documentation. Any questions coming from developers in this phase should probably be answered in documentation if we want to enable folks to pick this up and run with it. Otherwise, I could see the handful of people pushing the effort becoming a bottleneck in adoption. > > Hopefully this helps paint the landscape of where things are currently with respect to each piece. As always, let me know if you have any additional questions. If people want to discuss online, you can find me, and other contributors familiar with this topic, in #openstack-keystone or #openstack-dev on IRC (nic: lbragstad). > > [0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html > [1] https://etherpad.openstack.org/p/unified-limits-rocky-ptg > [2] https://tinyurl.com/y6ucarwm > [3] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/strict-two-level-enforcement-model.html > [4] https://review.openstack.org/#/q/project:openstack/oslo.limit+status:open > [5] https://review.openstack.org/#/c/600265/ > [6] http://logs.openstack.org/65/600265/3/check/openstack-tox-docs/a6bcf38/html/user/usage.html > > On Thu, Sep 6, 2018 at 8:56 PM Jaze Lee wrote: >> >> Lance Bragstad wrote on Thu, Sep 6, 2018 at 10:01 PM: >> > >> > I wish there was a better answer for this question, but currently there are only a handful of us working on the initiative. If you, or someone you know, is interested in getting involved, I'll happily help onboard people. >> >> Well, I can recommend some of my colleagues to work on this. I hope that in S, >> all services can use unified limits for their quota work. >> >> > >> > On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee wrote: >> >> >> >> Only one service in Stein? >> >> Are there ways to move this along faster?
>> >> Lance Bragstad 于2018年9月5日周三 下午9:29写道: >> >> > >> >> > Not yet. Keystone worked through a bunch of usability improvements with the unified limits API last release and created the oslo.limit library. We have a patch or two left to land in oslo.limit before projects can really start using unified limits [0]. >> >> > >> >> > We're hoping to get this working with at least one resource in another service (nova, cinder, etc...) in Stein. >> >> > >> >> > [0] https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init >> >> > >> >> > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee wrote: >> >> >> >> >> >> Hello, >> >> >> Does nova and cinder use keystone's unified limits api to do quota job? >> >> >> If not, is there a plan to do this? >> >> >> Thanks a lot. >> >> >> >> >> >> -- >> >> >> 谦谦君子 >> >> >> >> >> >> __________________________________________________________________________ >> >> >> OpenStack Development Mailing List (not for usage questions) >> >> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > >> >> > __________________________________________________________________________ >> >> > OpenStack Development Mailing List (not for usage questions) >> >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> >> >> -- >> >> 谦谦君子 >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: 
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> 谦谦君子 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- 谦谦君子 From dougal at redhat.com Tue Sep 11 09:32:52 2018 From: dougal at redhat.com (Dougal Matthews) Date: Tue, 11 Sep 2018 10:32:52 +0100 Subject: [openstack-dev] [mistral] [PTL] PTL on Vacation 17th - 28th September Message-ID: Hey all, I'll be on vacation from 17th to the 28th of September. I don't anticipate anything coming up but Renat Akhmerov is standing in as PTL while I'm out. Cheers, Dougal -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Tue Sep 11 10:08:19 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 11 Sep 2018 12:08:19 +0200 Subject: [openstack-dev] [TripleO] Plan management refactoring for Life cycle In-Reply-To: References: <03885d00-c63c-91ae-af99-c8d89ae4d7c4@redhat.com> Message-ID: On 9/11/18 4:43 AM, James Slagle wrote: > On Mon, Sep 10, 2018 at 10:12 AM Jiri Tomasek wrote: >> >> Hi Mathieu, >> >> Thanks for bringing up the topic. There are several efforts currently in progress which should lead to solving the problems you're describing. We are working on introducing CLI commands which would perform the deployment configuration operations on deployment plan in Swift. 
This is a main step to finally reach CLI and GUI compatibility/interoperability. CLI will perform actions to configure deployment (roles, networks, environments selection, parameters setting etc.) by calling Mistral workflows which store the information in deployment plan in Swift. The result is that all the information which define the deployment are stored in central place - deployment plan in Swift and the deploy command is turned into simple 'openstack overcloud deploy'. Deployment plan then has plan-environment.yaml which has the list of environments used and customized parameter values, roles-data.yaml which carry roles definition and network-data.yaml which carry networks definition. The information stored in these files (and deployment plan in general) can then be treated as source of information about deployment. The deployment can then be easily exported and reliably replicated. >> >> Here is the document which we put together to identify missing pieces between GUI,CLI and Mistral TripleO API. We'll use this to discuss the topic at PTG this week and define work needed to be done to achieve the complete interoperability. [1] >> >> Also there is a pending patch from Steven Hardy which aims to remove CLI specific environments merging which should fix the problem with tracking of the environments used with CLI deployment. [2] >> >> [1] https://gist.github.com/jtomasek/8c2ae6118be0823784cdafebd9c0edac (Apologies for inconvenient format, I'll try to update this to better/editable format. Original doc: https://docs.google.com/spreadsheets/d/1ERfx2rnPq6VjkJ62JlA_E6jFuHt9vVl3j95dg6-mZBM/edit?usp=sharing) >> [2] https://review.openstack.org/#/c/448209/ > > > Related to this work, I'd like to see us store the plan in git instead > of swift. I think this would reduce some of the complexity around plan > management, and move us closer to a simpler undercloud architecture. 
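[Editor's sketch] As a rough illustration of the plan-in-git idea, each configuration change to the deployment plan could become a commit, making the plan history inspectable with ordinary git tooling. The file contents and commit flow below are assumptions for the sketch (the file name follows the plan-environment.yaml mentioned earlier), not the actual TripleO implementation:

```shell
# Hypothetical sketch: one commit per deployment plan change.
# Nothing here is the real TripleO workflow.
set -e
plan=$(mktemp -d)
cd "$plan"
git init -q .
git config user.name "plan-manager"
git config user.email "plan-manager@example.com"

printf 'name: overcloud\nparameter_defaults: {}\n' > plan-environment.yaml
git add plan-environment.yaml
git commit -q -m "Create initial deployment plan"

printf 'name: overcloud\nparameter_defaults: {NtpServer: pool.ntp.org}\n' \
    > plan-environment.yaml
git commit -qam "Set NtpServer parameter"

git log --oneline    # one line per plan change
git diff HEAD~1      # exactly what the last change touched
```

The payoff is the familiar devops experience mentioned above: `git log`, `git diff`, and `git revert` replace bespoke plan-management tooling.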
> It would be nice to see each change to the plan represented as a new git > commit, so we can even see the changes to the plan as roles, networks, > services, etc., are selected. > > I also think git would provide a familiar experience for both > developers and operators who are already accustomed to devops type > workflows. I think we could make these changes without impacting the > API too much or, hopefully, at all. +42! See also the related RFE (drafted only) [0] [0] https://bugs.launchpad.net/tripleo/+bug/1782139 > -- Best regards, Bogdan Dobrelya, Irc #bogdando From tony at bakeyournoodle.com Tue Sep 11 10:22:59 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 11 Sep 2018 20:22:59 +1000 Subject: [openstack-dev] [Release-job-failures] Tag of openstack/kuryr-kubernetes failed In-Reply-To: References: Message-ID: <20180911102259.GA7412@thor.bakeyournoodle.com> On Tue, Sep 11, 2018 at 10:20:52AM +0000, zuul at openstack.org wrote: > Build failed. > > - publish-openstack-releasenotes http://logs.openstack.org/6c/6ce2f5edd0b3dbb2c7edebca37ccc8219675e189/tag/publish-openstack-releasenotes/85bfc1a/ : FAILURE in 4m 45s > - publish-openstack-releasenotes-python3 http://logs.openstack.org/6c/6ce2f5edd0b3dbb2c7edebca37ccc8219675e189/tag/publish-openstack-releasenotes-python3/abd87f9/ : SUCCESS in 4m 17s This looks like the same failure from yesterday which has been fixed in reno but not yet released. As Doug points out the py3 job passed so the content is live ;P Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dmellado at redhat.com Tue Sep 11 10:28:13 2018 From: dmellado at redhat.com (Daniel Mellado) Date: Tue, 11 Sep 2018 12:28:13 +0200 Subject: [openstack-dev] [kuryr][fuxi] Retiring fuxi* projects Message-ID: Hi all, After having discussed with the project maintainers, we'll no longer be supporting fuxi, fuxi-golang, or fuxi-kubernetes, and I'll start the process of retiring them. We're driving this as part of the py3 goal and because contributors no longer have time available to work on these projects. Thanks for your help so far! Best! Daniel -------------- next part -------------- A non-text attachment was scrubbed... Name: 0x13DDF774E05F5B85.asc Type: application/pgp-keys Size: 2208 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From mardim at intracom-telecom.com Tue Sep 11 11:12:27 2018 From: mardim at intracom-telecom.com (Markou Dimitris) Date: Tue, 11 Sep 2018 14:12:27 +0300 Subject: [openstack-dev] ODL Fluorine SR0 debian package Message-ID: Hello community, The new ODL Fluorine SR0 debian package has been uploaded to the ODL team PPA: https://launchpad.net/~odl-team/+archive/ubuntu/fluorine Regards, Dimitrios Markou Software Engineer SDN/NFV Team ______________________________________ Intracom Telecom 19.7 km Markopoulou Ave., Peania, GR 19002 t: +30 2106677408 f: +30 2106671887 mardim at intracom-telecom.com www.intracom-telecom.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From liu.xuefeng1 at zte.com.cn Tue Sep 11 11:29:09 2018 From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn) Date: Tue, 11 Sep 2018 19:29:09 +0800 (CST) Subject: [openstack-dev] Re: [senlin] Nominations to Senlin Core Team In-Reply-To: References: CAN81NT4G-8OUwR8miOLsYOb4=fxKsWxUTDEnucbt3=4XQR=CPQ@mail.gmail.com Message-ID: <201809111929090515010@zte.com.cn> +1 for both Original mail From: DucTruong To: openstack-dev at lists.openstack.org Date: 2018-09-11 01:00 Subject: [openstack-dev] [senlin] Nominations to Senlin Core Team Hi Senlin Core Team, I would like to nominate 2 new core reviewers for Senlin: [1] Jude Cross (jucross at blizzard.com) [2] Erik Olof Gunnar Andersson (eandersson at blizzard.com) Jude has done a number of reviews and contributed some important patches to Senlin during the Rocky cycle that resolved locking problems. Erik has the highest number of reviews in Rocky and has contributed high-quality code reviews for some time. [1] http://stackalytics.com/?module=senlin-group&metric=marks&release=rocky&user_id=jucross at blizzard.com [2] http://stackalytics.com/?module=senlin-group&metric=marks&user_id=eandersson&release=rocky Voting is open for 7 days. Please reply with your +1 vote in favor or -1 as a veto vote. Regards, Duc (dtruong) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From doug at doughellmann.com Tue Sep 11 11:47:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 11 Sep 2018 05:47:35 -0600 Subject: [openstack-dev] [nova][placement] openstack/placement governance switch plan In-Reply-To: <9109800c-423a-35b5-89bf-ffd6dea39eec@gmail.com> References: <9109800c-423a-35b5-89bf-ffd6dea39eec@gmail.com> Message-ID: <1536666295-sup-2048@lrrr.local> Excerpts from melanie witt's message of 2018-09-10 23:42:06 -0600: > Howdy everyone, > > Those of us involved in the placement extraction process sat down > together today to discuss the plan for openstack/placement governance. > We agreed on a set of criteria which we will use to determine when we > will switch the openstack/placement governance from the compute project > to its own project. I'd like to update everyone with a summary of the > plan we agreed upon. > > Attendees: Balázs Gibizer, Chris Dent, Dan Smith, Ed Leafe, Eric Fried, > Jay Pipes, Matt Riedemann, Melanie Witt, Mohammed Naser, Sylvain Bauza > > The targets we have set are: > > - Devstack/grenade job that executes an upgrade which deploys the > extracted placement code > - Support in one of the deployment tools to deploy extracted placement > code (Tripleo) > - An upgrade job using any deployment tool (this might have to be a > manual test by a deployment tool team member if none of the deployment > tools have an upgrade job) > - Implementation of nested vGPU resource support in the xenapi and > libvirt drivers > - Functional test with vGPU resources that verifies reshaping of flat > vGPU resources to nested vGPU resources and successful scheduling to the > same compute host after reshaping > - Lab test with real hardware of the same ^ (xenapi and libvirt) > > Once we have achieved these targets, we will switch openstack/placement > governance from the compute project to its own project. 
The > placement-core team will flatten nova-core into individual members of > placement-core so it may evolve, the PTL of openstack/placement will be > the same as the openstack/nova PTL for the remainder of the release > cycle, and the electorate for the openstack/placement PTL election for > the next release cycle will be determined by the commit history of the > extracted placement code repo, probably by date, to include contributors > from the previous two release cycles, as per usual. > > Thank you to Mohammed for facilitating the discussion, we really > appreciate it. > > Cheers, > -melanie > This is good news. Thank you all for taking the time to sit down and put this plan together. Doug From doug at doughellmann.com Tue Sep 11 12:26:36 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 11 Sep 2018 06:26:36 -0600 Subject: [openstack-dev] [goals][python3][trove] starting zuul migration for trove Message-ID: <1536668716-sup-5812@lrrr.local> Here are the zuul migration patches for the trove team's repositories. Please prioritize these reviews. 
+----------------------------------------------+--------------------------------+---------------+
| Subject                                      | Repo                           | Branch        |
+----------------------------------------------+--------------------------------+---------------+
| remove job settings for trove repositories   | openstack-infra/project-config | master        |
| import zuul job settings from project-config | openstack/python-troveclient   | master        |
| switch documentation job to new PTI          | openstack/python-troveclient   | master        |
| add python 3.6 unit test job                 | openstack/python-troveclient   | master        |
| import zuul job settings from project-config | openstack/python-troveclient   | stable/ocata  |
| import zuul job settings from project-config | openstack/python-troveclient   | stable/pike   |
| import zuul job settings from project-config | openstack/python-troveclient   | stable/queens |
| import zuul job settings from project-config | openstack/python-troveclient   | stable/rocky  |
| fix tox python3 overrides                    | openstack/trove                | master        |
| update pylint to 1.9.2                       | openstack/trove                | master        |
| make tox -e pylint only run pylint           | openstack/trove                | master        |
| import zuul job settings from project-config | openstack/trove                | master        |
| switch documentation job to new PTI          | openstack/trove                | master        |
| add python 3.6 unit test job                 | openstack/trove                | master        |
| import zuul job settings from project-config | openstack/trove                | stable/ocata  |
| import zuul job settings from project-config | openstack/trove                | stable/pike   |
| import zuul job settings from project-config | openstack/trove                | stable/queens |
| import zuul job settings from project-config | openstack/trove                | stable/rocky  |
| import zuul job settings from project-config | openstack/trove-dashboard      | master        |
| switch documentation job to new PTI          | openstack/trove-dashboard      | master        |
| add python 3.6 unit test job                 | openstack/trove-dashboard      | master        |
| import zuul job settings from project-config | openstack/trove-dashboard      | stable/ocata  |
| import zuul job settings from project-config | openstack/trove-dashboard      | stable/pike   |
| import zuul job settings from project-config | openstack/trove-dashboard      | stable/queens |
| import zuul job settings from project-config | openstack/trove-dashboard      | stable/rocky  |
| import zuul job settings from project-config | openstack/trove-specs          | master        |
| import zuul job settings from project-config | openstack/trove-tempest-plugin | master        |
+----------------------------------------------+--------------------------------+---------------+
From lbragstad at gmail.com Tue Sep 11 13:10:05 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 11 Sep 2018 07:10:05 -0600 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: References: Message-ID: Extra eyes on the API would be appreciated. We're also close to the point where we can start incorporating oslo.limit into services, so preparing those changes might be useful, too. One of the outcomes from yesterday's session was that Jay and Mel (from nova) were going to work out some examples we could use to finish up the enforcement code in oslo.limit. Helping out with that or picking it up would certainly help move the ball forward in nova. On Tue, Sep 11, 2018 at 1:15 AM Jaze Lee wrote: > I recommend lijie at unitedstack.com to join in to help to work forward. > May be first we should the keystone unified limits api really ok or > something else ? > > Lance Bragstad 于2018年9月8日周六 上午2:35写道: > > > > That would be great! I can break down the work a little bit to help > describe where we are at with different parts of the initiative. Hopefully > it will be useful for your colleagues in case they haven't been closely > following the effort. > > > > # keystone > > > > Based on the initial note in this thread, I'm sure you're aware of > keystone's status with respect to unified limits. But to recap, the initial > implementation landed in Queens and targeted flat enforcement [0].
During > the Rocky PTG we sat down with other services and a few operators to > explain the current status in keystone and if either developers or > operators had feedback on the API specifically. Notes were captured in > etherpad [1]. We spent the Rocky cycle fixing usability issues with the API > [2] and implementing support for a hierarchical enforcement model [3]. > > > > At this point keystone is ready for services to start consuming the > unified limits work. The unified limits API is still marked as stable and > it will likely stay that way until we have at least one project using > unified limits. We can use that as an opportunity to do a final flush of > any changes that need to be made to the API before fully supporting it. The > keystone team expects that to be a quick transition, as we don't want to > keep the API hanging in an experimental state. It's really just a safe > guard to make sure we have the opportunity to use it in another service > before fully committing to the API. Ultimately, we don't want to > prematurely mark the API as supported when other services aren't even using > it yet, and then realize it has issues that could have been fixed prior to > the adoption phase. > > > > # oslo.limit > > > > In parallel with the keystone work, we created a new library to aid > services in consuming limits. Currently, the sole purpose of oslo.limit is > to abstract project and project hierarchy information away from the > service, so that services don't have to reimplement client code to > understand project trees, which could arguably become complex and lead to > inconsistencies in u-x across services. > > > > Ideally, a service should be able to pass some relatively basic > information to oslo.limit and expect an answer on whether or not usage for > that claim is valid. For example, here is a project ID, resource name, and > resource quantity, tell me if this project is over it's associated limit or > default limit. 
> > > > We're currently working on implementing the enforcement bits of > oslo.limit, which requires making API calls to keystone in order to > retrieve the deployed enforcement model, limit information, and project > hierarchies. Then it needs to reason about those things and calculate usage > from the service in order to determine if the request claim is valid or > not. There are patches up for this work, and reviews are always welcome [4]. > > > > Note that we haven't released oslo.limit yet, but once the basic > enforcement described above is implemented we will. Then services can > officially pull it into their code as a dependency and we can work out > remaining bugs in both keystone and oslo.limit. Once we're confident in > both the API and the library, we'll bump oslo.limit to version 1.0 at the > same time we graduate the unified limits API from "experimental" to > "supported". Note that oslo libraries <1.0 are considered experimental, > which fits nicely with the unified limit API being experimental as we shake > out usability issues in both pieces of software. > > > > # services > > > > Finally, we'll be in a position to start integrating oslo.limit into > services. I imagine this to be a coordinated effort between keystone, oslo, > and service developers. I do have a patch up that adds a conceptual > overview for developers consuming oslo.limit [5], which renders into [6]. > > > > To be honest, this is going to be a very large piece of work and it's > going to require a lot of communication. In my opinion, I think we can use > the first couple iterations to generate some well-written usage > documentation. Any questions coming from developers in this phase should > probably be answered in documentation if we want to enable folks to pick > this up and run with it. Otherwise, I could see the handful of people > pushing the effort becoming a bottle neck in adoption. 
> > > > Hopefully this helps paint the landscape of where things are currently > with respect to each piece. As always, let me know if you have any > additional questions. If people want to discuss online, you can find me, > and other contributors familiar with this topic, in #openstack-keystone or > #openstack-dev on IRC (nic: lbragstad). > > > > [0] > http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html > > [1] https://etherpad.openstack.org/p/unified-limits-rocky-ptg > > [2] https://tinyurl.com/y6ucarwm > > [3] > http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/strict-two-level-enforcement-model.html > > [4] > https://review.openstack.org/#/q/project:openstack/oslo.limit+status:open > > [5] https://review.openstack.org/#/c/600265/ > > [6] > http://logs.openstack.org/65/600265/3/check/openstack-tox-docs/a6bcf38/html/user/usage.html > > > > On Thu, Sep 6, 2018 at 8:56 PM Jaze Lee wrote: > >> > >> Lance Bragstad 于2018年9月6日周四 下午10:01写道: > >> > > >> > I wish there was a better answer for this question, but currently > there are only a handful of us working on the initiative. If you, or > someone you know, is interested in getting involved, I'll happily help > onboard people. > >> > >> Well,I can recommend some my colleges to work on this. I wish in S, > >> all service can use unified limits to do quota job. > >> > >> > > >> > On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee wrote: > >> >> > >> >> On Stein only one service? > >> >> Is there some methods to move this more fast? > >> >> Lance Bragstad 于2018年9月5日周三 下午9:29写道: > >> >> > > >> >> > Not yet. Keystone worked through a bunch of usability improvements > with the unified limits API last release and created the oslo.limit > library. We have a patch or two left to land in oslo.limit before projects > can really start using unified limits [0]. > >> >> > > >> >> > We're hoping to get this working with at least one resource in > another service (nova, cinder, etc...) 
in Stein. > >> >> > > >> >> > [0] > https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init > >> >> > > >> >> > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee wrote: > >> >> >> > >> >> >> Hello, > >> >> >> Do nova and cinder use keystone's unified limits API to do > the quota job? > >> >> >> If not, is there a plan to do this? > >> >> >> Thanks a lot. > >> >> >> > >> >> >> -- > >> >> >> 谦谦君子 > >> >> >> > >> >> >> > __________________________________________________________________________ > >> >> >> OpenStack Development Mailing List (not for usage questions) > >> >> >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> >> > > >> >> > > __________________________________________________________________________ > >> >> > OpenStack Development Mailing List (not for usage questions) > >> >> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> >> > >> >> > >> >> -- > >> >> 谦谦君子 > >> >> > >> >> > __________________________________________________________________________ > >> >> OpenStack Development Mailing List (not for usage questions) > >> >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >> > > __________________________________________________________________________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> > >> -- > >> 谦谦君子 > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >>
Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > 谦谦君子 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbultel at redhat.com Tue Sep 11 13:15:53 2018 From: mbultel at redhat.com (mathieu bultel) Date: Tue, 11 Sep 2018 15:15:53 +0200 Subject: [openstack-dev] [TripleO] Plan management refactoring for Life cycle In-Reply-To: References: <03885d00-c63c-91ae-af99-c8d89ae4d7c4@redhat.com> Message-ID: <5148b7ce-b6a1-dd1f-f9ee-422e45c6010a@redhat.com> Hi, On 09/11/2018 12:08 PM, Bogdan Dobrelya wrote: > On 9/11/18 4:43 AM, James Slagle wrote: >> On Mon, Sep 10, 2018 at 10:12 AM Jiri Tomasek >> wrote: >>> >>> Hi Mathieu, >>> >>> Thanks for bringing up the topic. There are several efforts >>> currently in progress which should lead to solving the problems >>> you're describing. We are working on introducing CLI commands which >>> would perform the deployment configuration operations on deployment >>> plan in Swift. This is a main step to finally reach CLI and GUI >>> compatibility/interoperability. CLI will perform actions to >>> configure deployment (roles, networks, environments selection, >>> parameters setting etc.) by calling Mistral workflows which store >>> the information in deployment plan in Swift. 
The result is that all >>> the information which defines the deployment is stored in a central >>> place - the deployment plan in Swift - and the deploy command is turned >>> into a simple 'openstack overcloud deploy'. The deployment plan >>> then has plan-environment.yaml, which has the list of environments >>> used and customized parameter values, roles-data.yaml, which carries >>> the roles definition, and network-data.yaml, which carries the networks >>> definition. The information stored in these files (and the deployment >>> plan in general) can then be treated as the source of information about >>> the deployment. The deployment can then be easily exported and reliably >>> replicated. >>> >>> Here is the document which we put together to identify missing >>> pieces between the GUI, CLI and Mistral TripleO API. We'll use this to >>> discuss the topic at the PTG this week and define the work needed to be done >>> to achieve complete interoperability. [1] >>> >>> Also there is a pending patch from Steven Hardy which aims to remove >>> CLI-specific environments merging, which should fix the problem with >>> tracking the environments used with CLI deployment. [2] >>> Thank you, Jirka, for pointing me to this work. I will be happy to help with those efforts, at least for the life cycle part (Update/Upgrade/Scale) of this big feature. I can't attend the PTG this week unfortunately, but if you can point me to the etherpad with the summary of the session it would be very nice. I think the review from Steven aims to solve more or less the same issue as my current review. I will go through it in detail, and AFAIS the last changes are old. >>> [1] >>> https://gist.github.com/jtomasek/8c2ae6118be0823784cdafebd9c0edac >>> (Apologies for the inconvenient format, I'll try to update this to a >>> better/editable format.
Original doc: >>> https://docs.google.com/spreadsheets/d/1ERfx2rnPq6VjkJ62JlA_E6jFuHt9vVl3j95dg6-mZBM/edit?usp=sharing) >>> [2] https://review.openstack.org/#/c/448209/ >> >> >> Related to this work, I'd like to see us store the plan in git instead >> of swift. I think this would reduce some of the complexity around plan >> management, and move us closer to a simpler undercloud architecture. >> It would be nice to see each change to the plan represented as a new git >> commit, so we can even see the changes to the plan as roles, networks, >> services, etc, are selected. >> >> I also think git would provide a familiar experience for both >> developers and operators who are already accustomed to devops type >> workflows. I think we could make these changes without impacting the API too much or, hopefully, at all. > > +42! > See also the related RFE (drafted only) [0] > > [0] https://bugs.launchpad.net/tripleo/+bug/1782139 Thanks James, Same here +1 (or 42 :)) > >> > > From strigazi at gmail.com Tue Sep 11 14:16:00 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 11 Sep 2018 16:16:00 +0200 Subject: [openstack-dev] [magnum] Upcoming meeting 2018-09-11 Tuesday UTC 2100 Message-ID: Hello team, This is a reminder for the upcoming magnum meeting [0]. For convenience you can import this from here [1] or view it in HTML here [2]. Cheers, Spyros [0] https://wiki.openstack.org/wiki/Meetings/Containers#Weekly_Magnum_Team_Meeting [1] https://calendar.google.com/calendar/ical/dl8ufmpm2ahi084d038o7rgoek%40group.calendar.google.com/public/basic.ics [2] https://calendar.google.com/calendar/embed?src=dl8ufmpm2ahi084d038o7rgoek%40group.calendar.google.com&ctz=Europe/Zurich -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zhipengh512 at gmail.com Tue Sep 11 14:38:04 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 11 Sep 2018 08:38:04 -0600 Subject: [openstack-dev] [cyborg]Day 2 Arrangement Reminder Message-ID: Hi Team, Today the Cyborg session will concentrate on two items that were not covered yesterday: neutron-cyborg interaction and general device mgmt. Since I will be mostly at the Public Cloud WG session, Sundar will help to lead the discussion, and our Stein PTL Li Liu will host the online ZOOM conference. You are also welcome to propose new topics. Our team photo is scheduled for 11:30, so let's gather at the lobby front around 11:25 :) All the information can be found at https://etherpad.openstack.org/p/cyborg-ptg-stein . -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Tue Sep 11 15:01:49 2018 From: dms at danplanet.com (Dan Smith) Date: Tue, 11 Sep 2018 08:01:49 -0700 Subject: [openstack-dev] [upgrade] request for pre-upgrade check for db purge In-Reply-To: (Matt Riedemann's message of "Mon, 10 Sep 2018 17:10:59 -0600") References: Message-ID: > How do people feel about this? It seems pretty straight-forward to > me. If people are generally in favor of this, then the question is > what would be sane defaults - or should we not assume a default and > force operators to opt into this? I dunno, adding something to nova.conf that is only used for nova-status like that seems kinda weird to me. It's just a warning/informational sort of thing so it just doesn't seem worth the complication to me.
Moving it to an age thing set at one year seems okay, and better than making the absolute limit more configurable. Any reason why this wouldn't just be a command line flag to status if people want it to behave in a specific way from a specific tool? --Dan From fungi at yuggoth.org Tue Sep 11 16:53:05 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 11 Sep 2018 16:53:05 +0000 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <87bmamayd8.fsf@meyer.lemoncheese.net> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <5f2d90fc-b96f-2284-3b86-fb6e2c6fbcc1@inaugust.com> <87bmamayd8.fsf@meyer.lemoncheese.net> Message-ID: <20180911165305.btj6xzokdt6v4xsq@yuggoth.org> On 2018-08-01 08:40:51 -0700 (-0700), James E. Blair wrote: > Monty Taylor writes: > > On 08/01/2018 12:45 AM, Ian Wienand wrote: > > > I'd suggest to start, people with an interest in a channel can > > > request +r from an IRC admin in #openstack-infra and we track > > > it at [2] > > > > To mitigate the pain caused by +r - we have created a channel > > called #openstack-unregistered and have configured the channels > > with the +r flag to forward people to it. [...] > It turns out this was a very popular option, so we've gone ahead > and performed this for all channels registered with accessbot. [...] We rolled this back 5 days ago for all channels and haven't had any new reports of in-channel spamming yet. Hopefully this means the recent flood is behind us now but definitely let us know (replying on this thread or in #openstack-infra on Freenode) if you see any signs of resurgence. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aschultz at redhat.com Tue Sep 11 16:53:55 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 11 Sep 2018 10:53:55 -0600 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: References: Message-ID: Thanks everyone for coming and chatting. From the meeting, we came away with a few items where we can collaborate. Here are some specific bullet points:

- TripleO folks should feel free to propose some minor structural changes if they make the integration easier. TripleO is currently investigating what it would look like to pull the keystone ansible parts out of tripleo-heat-templates and put it into ansible-role-tripleo-keystone. It would be beneficial to use this role as an example for how the os_keystone role can be consumed.
- The openstack-ansible-tests repository has some good examples of ansible-lint rules that can be used to improve quality.
- Tags could be used to limit the scope of OpenStack Ansible roles, but it sounds like including tasks would be a better pattern.
- Need to establish a pattern for disabling packaging/service configurations globally in OpenStack Ansible roles.
- Shared roles are open for reuse/replacement if something better is available (upstream/elsewhere).

If anyone has any others, feel free to comment. Thanks, -Alex On Mon, Sep 10, 2018 at 10:58 AM, Alex Schultz wrote: > I just realized I booked the room and put it in the etherpad but > forgot to email out the time. > > Time: Tuesday 09:00-10:45 > Room: Big Thompson > > https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg > > Thanks, > -Alex > > On Tue, Sep 4, 2018 at 1:03 PM, Alex Schultz wrote: >> On Thu, Aug 9, 2018 at 2:43 PM, Mohammed Naser wrote: >>> Hi Alex, >>> >>> I am very much in favour of what you're bringing up. We do have
We do have >>> multiple projects that leverage Ansible in different ways and we all >>> end up doing the same thing at the end. The duplication of work is >>> not really beneficial for us as it takes away from our use-cases. >>> >>> I believe that there is a certain number of steps that we all share >>> regardless of how we deploy, some of the things that come up to me >>> right away are: >>> >>> - Configuring infrastructure services (i.e.: create vhosts for service >>> in rabbitmq, create databases for services, configure users for >>> rabbitmq, db, etc) >>> - Configuring inter-OpenStack services (i.e. keystone_authtoken >>> section, creating endpoints, etc and users for services) >>> - Configuring actual OpenStack services (i.e. >>> /etc//.conf file with the ability of extending >>> options) >>> - Running CI/integration on a cloud (i.e. common role that literally >>> gets an admin user, password and auth endpoint and creates all >>> resources and does CI) >>> >>> This would deduplicate a lot of work, and especially the last one, it >>> might be beneficial for more than Ansible-based projects, I can >>> imagine Puppet OpenStack leveraging this as well inside Zuul CI >>> (optionally)... However, I think that this something which we should >>> discus further for the PTG. I think that there will be a tiny bit >>> upfront work as we all standarize but then it's a win for all involved >>> communities. >>> >>> I would like to propose that deployment tools maybe sit down together >>> at the PTG, all share how we use Ansible to accomplish these tasks and >>> then perhaps we can work all together on abstracting some of these >>> concepts together for us to all leverage. >>> >> >> I'm currently trying to get a spot on Tuesday morning to further >> discuss some of this items. In the mean time I've started an >> etherpad[0] to start collecting ideas for things to discuss. 
At the >> moment I've got the tempest role collaboration and some basic ideas >> for best practice items that we can discuss. Feel free to add your >> own and I'll update the etherpad with a time slot when I get one >> nailed down. >> >> Thanks, >> -Alex >> >> [0] https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg >> >>> I'll let others chime in as well. >>> >>> Regards, >>> Mohammed >>> >>> On Thu, Aug 9, 2018 at 4:31 PM, Alex Schultz wrote: >>>> Ahoy folks, >>>> >>>> I think it's time we come up with some basic rules/patterns on where >>>> code lands when it comes to OpenStack related Ansible roles and as we >>>> convert/export things. There was a recent proposal to create an >>>> ansible-role-tempest[0] that would take what we use in >>>> tripleo-quickstart-extras[1] and separate it for re-usability by >>>> others. So it was asked if we could work with the openstack-ansible >>>> team and leverage the existing openstack-ansible-os_tempest[2]. It >>>> turns out we have a few more already existing roles laying around as >>>> well[3][4]. >>>> >>>> What I would like to propose is that we as a community come together >>>> to agree on specific patterns so that we can leverage the same roles >>>> for some of the core configuration/deployment functionality while >>>> still allowing for project-specific customization. What I've >>>> noticed across all the projects is that we have a few specific core >>>> pieces of functionality that need to be handled (or skipped as it may >>>> be) for each service being deployed. >>>> >>>> 1) software installation >>>> 2) configuration management >>>> 3) service management >>>> 4) misc service actions >>>> >>>> Depending on which flavor of the deployment you're using, the content >>>> of each of these may be different. Just about the only thing that is >>>> shared between them all would be the configuration management part.
>>>> To that, I was wondering if there would be a benefit to establishing a >>>> pattern within, say, openstack-ansible where we can disable items #1 and >>>> #3 but reuse #2 in projects like kolla/tripleo where we need to do >>>> some configuration generation. If we can't establish a similar >>>> pattern, it'll make it harder to reuse and contribute between the >>>> various projects. >>>> >>>> In tripleo we've recently created a bunch of ansible-role-tripleo-* >>>> repositories which we were planning on moving the tripleo specific >>>> tasks (for upgrades, etc) to, and we were hoping that we might be able to >>>> reuse the upstream ansible roles similarly to how we've previously >>>> leveraged the puppet openstack work for configurations. So for us, it >>>> would be beneficial if we could maybe help align/contribute/guide the >>>> configuration management and maybe misc service action portions of the >>>> openstack-ansible roles, but be able to disable the actual software >>>> install/service management as that would be managed via our >>>> ansible-role-tripleo-* roles. >>>> >>>> Is this something that would be beneficial to further discuss at the >>>> PTG? Does anyone have any additional suggestions/thoughts? >>>> >>>> My personal thought for tripleo would be that we'd have >>>> tripleo-ansible call openstack-ansible- for core config but >>>> with package/service installation disabled, and call >>>> ansible-role-tripleo- for tripleo-specific actions such as >>>> opinionated packages/service configuration/upgrades. Maybe this is >>>> too complex? But at the same time, do we need to come up with 3 >>>> different ways to do this?
>>>> >>>> Thanks, >>>> -Alex >>>> >>>> [0] https://review.openstack.org/#/c/589133/ >>>> [1] http://git.openstack.org/cgit/openstack/tripleo-quickstart-extras/tree/roles/validate-tempest >>>> [2] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest/ >>>> [3] http://git.openstack.org/cgit/openstack/kolla-ansible/tree/ansible/roles/tempest >>>> [4] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-tempest >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> -- >>> Mohammed Naser — vexxhost >>> ----------------------------------------------------- >>> D. 514-316-8872 >>> D. 800-910-1726 ext. 200 >>> E. mnaser at vexxhost.com >>> W. http://vexxhost.com >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jungleboyj at gmail.com Tue Sep 11 17:09:30 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 11 Sep 2018 12:09:30 -0500 Subject: [openstack-dev] [ptg][cinder][placement] etherpad for this afternoon's meeting Message-ID: All, I have created an etherpad to take notes during our meeting this afternoon: https://etherpad.openstack.org/p/cinder-placement-denver-ptg-2018 If you have information you want to get in there before the meeting I would appreciate you pre-populating the pad. 
Jay From sfinucan at redhat.com Tue Sep 11 17:13:52 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 11 Sep 2018 11:13:52 -0600 Subject: [openstack-dev] [goals][python3][nova] starting zuul migration for nova repos In-Reply-To: <1536608885-sup-3596@lrrr.local> References: <1536608885-sup-3596@lrrr.local> Message-ID: On Mon, 2018-09-10 at 13:48 -0600, Doug Hellmann wrote: > Melanie gave me the go-ahead to propose the patches, so here's the list > of patches for the zuul migration, doc job update, and python 3.6 unit > tests for the nova repositories. I've reviewed/+2d all of these on master and think Sylvain will be following up with the +Ws. I need someone else to handle the 'stable/XXX' patches though. Here's a query for anyone that wants to jump in here. https://review.openstack.org/#/q/topic:python3-first+status:open+(openstack/nova+OR+project:openstack/nova-specs+OR+openstack/os-traits+OR+openstack/os-vif+OR+openstack/osc-placement+OR+openstack/python-novaclient) Stephen PS: Thanks, Andreas, for the follow-up cleanup patches. 
Much appreciated :)

> +----------------------------------------------+--------------------------------+---------------+
> > Subject | Repo | Branch |
> > +----------------------------------------------+--------------------------------+---------------+
> > remove job settings for nova repositories | openstack-infra/project-config | master |
> > import zuul job settings from project-config | openstack/nova | master |
> > switch documentation job to new PTI | openstack/nova | master |
> > add python 3.6 unit test job | openstack/nova | master |
> > import zuul job settings from project-config | openstack/nova | stable/ocata |
> > import zuul job settings from project-config | openstack/nova | stable/pike |
> > import zuul job settings from project-config | openstack/nova | stable/queens |
> > import zuul job settings from project-config | openstack/nova | stable/rocky |
> > import zuul job settings from project-config | openstack/nova-specs | master |
> > import zuul job settings from project-config | openstack/os-traits | master |
> > switch documentation job to new PTI | openstack/os-traits | master |
> > add python 3.6 unit test job | openstack/os-traits | master |
> > import zuul job settings from project-config | openstack/os-traits | stable/pike |
> > import zuul job settings from project-config | openstack/os-traits | stable/queens |
> > import zuul job settings from project-config | openstack/os-traits | stable/rocky |
> > import zuul job settings from project-config | openstack/os-vif | master |
> > switch documentation job to new PTI | openstack/os-vif | master |
> > add python 3.6 unit test job | openstack/os-vif | master |
> > import zuul job settings from project-config | openstack/os-vif | stable/ocata |
> > import zuul job settings from project-config | openstack/os-vif | stable/pike |
> > import zuul job settings from project-config | openstack/os-vif | stable/queens |
> > import zuul job settings from project-config | openstack/os-vif | stable/rocky |
> > import zuul job settings from project-config | openstack/osc-placement | master |
> > switch documentation job to new PTI | openstack/osc-placement | master |
> > add python 3.6 unit test job | openstack/osc-placement | master |
> > import zuul job settings from project-config | openstack/osc-placement | stable/queens |
> > import zuul job settings from project-config | openstack/osc-placement | stable/rocky |
> > import zuul job settings from project-config | openstack/python-novaclient | master |
> > switch documentation job to new PTI | openstack/python-novaclient | master |
> > add python 3.6 unit test job | openstack/python-novaclient | master |
> > add lib-forward-testing-python3 test job | openstack/python-novaclient | master |
> > import zuul job settings from project-config | openstack/python-novaclient | stable/ocata |
> > import zuul job settings from project-config | openstack/python-novaclient | stable/pike |
> > import zuul job settings from project-config | openstack/python-novaclient | stable/queens |
> > import zuul job settings from project-config | openstack/python-novaclient | stable/rocky |
> > +----------------------------------------------+--------------------------------+---------------+

> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kennelson11 at gmail.com Tue Sep 11 17:47:13 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 11 Sep 2018 11:47:13 -0600 Subject: [openstack-dev] [Storyboard] PTG Planning & Upcoming Meeting Cancelled In-Reply-To: References: Message-ID: Update! We will be in Vail this afternoon. Lunch ends at 1:30 so we hope to be starting conversations by 1:45. -Kendall (diablo_rojo) On Fri, Sep 7, 2018 at 2:07 PM Kendall Nelson wrote: > Hello!
> > With the PTG in just a few days, I wanted to give some info and updates so > that you are prepared. > > 1. This coming week's regular meeting on Wednesday will be cancelled. > > 2. I am planning on booking Blanca Peak for the whole afternoon on Tuesday > for discussions. Just waiting for this patch to merge[0]. If we need more > time we can schedule something later in the week. See you there! > > 3. Here [1] is the etherpad that we've been collecting discussion topics > into. If there is anything you want to add, feel free. > > -Kendall (diablo_rojo) > > [0] https://review.openstack.org/#/c/600665/ > [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Tue Sep 11 18:07:29 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 11 Sep 2018 20:07:29 +0200 Subject: [openstack-dev] [cinder][ptg] Topics scheduled for next week ... In-Reply-To: References: Message-ID: <20180911180729.qmvlogbnpyc32csn@localhost> On 07/09, Jay S Bryant wrote: > Team, > > I have created an etherpad for each of the days of the PTG and split out the > proposed topics from the planning etherpad into the individual days for > discussion: [1] [2] [3] > > If you want to add an additional topic please add it to Friday or find some > time on one of the other days. > > I look forward to discussing all these topics with you all next week. > > Thanks! > > Jay Thanks Jay. I have added to the Cinder general etherpad the shared_target discussion topic, as I believe we should be discussing it in the Cinder room first before Thursday's meeting with Nova. I saw that on Wednesday the 2:30 to 3:00 privsep topic is a duplicate of the 12:00 to 12:30 slot, so I have taken the liberty of replacing it with the shared_targets one. I hope that's alright. Cheers, Gorka. 
> > [1] https://etherpad.openstack.org/p/cinder-ptg-stein-wednesday > > [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday > > [3] https://etherpad.openstack.org/p/cinder-ptg-stein-friday > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From james.page at canonical.com Tue Sep 11 18:09:46 2018 From: james.page at canonical.com (James Page) Date: Tue, 11 Sep 2018 12:09:46 -0600 Subject: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team In-Reply-To: <5157f326-5422-6a76-efcd-a80439e5d778@gmail.com> References: <5157f326-5422-6a76-efcd-a80439e5d778@gmail.com> Message-ID: +1 On Wed, 5 Sep 2018 at 15:48 Billy Olsen wrote: > Hi, > > I'd like to propose Felipe Reyes to join the OpenStack Charmers team as > a core member. Over the past couple of years Felipe has contributed > numerous patches and reviews to the OpenStack charms [0]. His experience > and knowledge of the charms used in OpenStack and the usage of Juju make > him a great candidate. > > [0] - > > https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22 > > Thanks, > > Billy Olsen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaypipes at gmail.com Tue Sep 11 19:53:37 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 11 Sep 2018 13:53:37 -0600 Subject: [openstack-dev] [ptg][cinder][placement] etherpad for this afternoon's meeting In-Reply-To: References: Message-ID: Hi Jay, where is this discussion taking place? On Tue, Sep 11, 2018, 11:10 AM Jay S Bryant wrote: > All, > > I have created an etherpad to take notes during our meeting this > afternoon: > https://etherpad.openstack.org/p/cinder-placement-denver-ptg-2018 > > If you have information you want to get in there before the meeting I > would appreciate you pre-populating the pad. > > Jay > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Tue Sep 11 20:06:07 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 11 Sep 2018 22:06:07 +0200 Subject: [openstack-dev] [ptg][cinder][placement] etherpad for this afternoon's meeting In-Reply-To: References: Message-ID: <20180911200607.nqjlmyas7d4jevp4@localhost> On 11/09, Jay Pipes wrote: > Hi Jay, where is this discussion taking place? > Hi, It was on another email: Big Thompson Room on Tuesday from 15:15 to 17:00 Cheers, Gorka. > On Tue, Sep 11, 2018, 11:10 AM Jay S Bryant wrote: > > > All, > > > > I have created an etherpad to take notes during our meeting this > > afternoon: > > https://etherpad.openstack.org/p/cinder-placement-denver-ptg-2018 > > > > If you have information you want to get in there before the meeting I > > would appreciate you pre-populating the pad. 
> > > > Jay > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Tue Sep 11 20:34:33 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 11 Sep 2018 14:34:33 -0600 Subject: [openstack-dev] [nova] 2018 User Survey results Message-ID: <286ac106-70b8-b599-63e3-2b5eb7d47282@gmail.com> Hey all, The foundation sent me a copy of 2018 user survey responses to the following question about Nova: "How important is it to be able to customize Nova in your deployment, e.g. classload your own managers/drivers, use hooks, plug in API extensions, etc?" Note: this question populates for any user who indicates they are in production or testing with the Nova project. It is not a required question, so these responses do not necessarily include every user. There were a total of 373 responses. 
The number of responses for each multiple-choice answer was: - "Not important; I use pretty much stock Nova with maybe some small patches or bug fixes that aren't upstream.": 173 (46.4%) - "Somewhat important; I have some custom scheduler filters and other small patches but nothing major.": 144 (38.6%) - "Very important; my Nova deployment is heavily customized and hooks/plugins/custom APIs are a major part of my operation.": 56 (15.0%) And I made a Google Sheets chart out of the responses, which you can view here: https://docs.google.com/spreadsheets/d/e/2PACX-1vSFG4ev8VsMMsYXgQHC7Y24WXfdSp6YdwiGX3MGvCsYZ50qG8Po-2i7vOCppJEq8051skxzvb42GIUV/pubhtml?gid=584107382&single=true Cheers, -melanie From doug at doughellmann.com Tue Sep 11 20:47:55 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 11 Sep 2018 14:47:55 -0600 Subject: [openstack-dev] [goals][python3][charms] starting zuul migration Message-ID: <1536698839-sup-5341@lrrr.local> Here are the patches for the zuul migration for the OpenStack Charms project.
+-------------------------------------------------------+---------------------------------------------------+---------------+ | Subject | Repo | Branch | +-------------------------------------------------------+---------------------------------------------------+---------------+ | remove job settings for OpenStack Charms repositories | openstack-infra/project-config | master | | import zuul job settings from project-config | openstack/charm-aodh | stable/18.08 | | import zuul job settings from project-config | openstack/charm-aodh | master | | import zuul job settings from project-config | openstack/charm-barbican | stable/18.08 | | import zuul job settings from project-config | openstack/charm-barbican | master | | import zuul job settings from project-config | openstack/charm-barbican-softhsm | stable/18.08 | | import zuul job settings from project-config | openstack/charm-barbican-softhsm | master | | import zuul job settings from project-config | openstack/charm-ceilometer | stable/18.08 | | import zuul job settings from project-config | openstack/charm-ceilometer | master | | import zuul job settings from project-config | openstack/charm-ceilometer-agent | stable/18.08 | | import zuul job settings from project-config | openstack/charm-ceilometer-agent | master | | import zuul job settings from project-config | openstack/charm-ceph | master | | import zuul job settings from project-config | openstack/charm-ceph-fs | stable/18.08 | | import zuul job settings from project-config | openstack/charm-ceph-fs | master | | import zuul job settings from project-config | openstack/charm-ceph-mon | stable/18.08 | | import zuul job settings from project-config | openstack/charm-ceph-mon | master | | import zuul job settings from project-config | openstack/charm-ceph-osd | stable/18.08 | | import zuul job settings from project-config | openstack/charm-ceph-osd | master | | import zuul job settings from project-config | openstack/charm-ceph-proxy | stable/18.08 | | import 
zuul job settings from project-config | openstack/charm-ceph-proxy | master | | import zuul job settings from project-config | openstack/charm-ceph-radosgw | stable/18.08 | | import zuul job settings from project-config | openstack/charm-ceph-radosgw | master | | import zuul job settings from project-config | openstack/charm-cinder | stable/18.08 | | import zuul job settings from project-config | openstack/charm-cinder | master | | import zuul job settings from project-config | openstack/charm-cinder-backup | stable/18.08 | | import zuul job settings from project-config | openstack/charm-cinder-backup | master | | import zuul job settings from project-config | openstack/charm-cinder-ceph | stable/18.08 | | import zuul job settings from project-config | openstack/charm-cinder-ceph | master | | import zuul job settings from project-config | openstack/charm-cloudkitty | master | | import zuul job settings from project-config | openstack/charm-deployment-guide | master | | import zuul job settings from project-config | openstack/charm-deployment-guide | stable/pike | | import zuul job settings from project-config | openstack/charm-deployment-guide | stable/queens | | import zuul job settings from project-config | openstack/charm-deployment-guide | stable/rocky | | import zuul job settings from project-config | openstack/charm-designate | stable/18.08 | | import zuul job settings from project-config | openstack/charm-designate | master | | import zuul job settings from project-config | openstack/charm-designate-bind | stable/18.08 | | import zuul job settings from project-config | openstack/charm-designate-bind | master | | import zuul job settings from project-config | openstack/charm-glance | stable/18.08 | | import zuul job settings from project-config | openstack/charm-glance | master | | import zuul job settings from project-config | openstack/charm-glance-simplestreams-sync | stable/18.08 | | import zuul job settings from project-config | 
openstack/charm-glance-simplestreams-sync | master | | import zuul job settings from project-config | openstack/charm-glusterfs | master | | import zuul job settings from project-config | openstack/charm-gnocchi | stable/18.08 | | import zuul job settings from project-config | openstack/charm-gnocchi | master | | import zuul job settings from project-config | openstack/charm-guide | master | | import zuul job settings from project-config | openstack/charm-guide | stable/queens | | import zuul job settings from project-config | openstack/charm-guide | stable/rocky | | import zuul job settings from project-config | openstack/charm-hacluster | stable/18.08 | | import zuul job settings from project-config | openstack/charm-hacluster | master | | import zuul job settings from project-config | openstack/charm-heat | stable/18.08 | | import zuul job settings from project-config | openstack/charm-heat | master | | import zuul job settings from project-config | openstack/charm-interface-bind-rndc | master | | import zuul job settings from project-config | openstack/charm-interface-ceph-client | master | | import zuul job settings from project-config | openstack/charm-interface-ceph-mds | master | | import zuul job settings from project-config | openstack/charm-interface-hacluster | master | | import zuul job settings from project-config | openstack/charm-interface-keystone | master | | import zuul job settings from project-config | openstack/charm-interface-keystone-admin | master | | import zuul job settings from project-config | openstack/charm-interface-keystone-credentials | master | | import zuul job settings from project-config | openstack/charm-interface-keystone-domain-backend | master | | import zuul job settings from project-config | openstack/charm-interface-manila-plugin | master | | import zuul job settings from project-config | openstack/charm-interface-mysql-shared | master | | import zuul job settings from project-config | 
openstack/charm-interface-neutron-plugin | master | | import zuul job settings from project-config | openstack/charm-interface-odl-controller-api | master | | import zuul job settings from project-config | openstack/charm-interface-openstack-ha | master | | import zuul job settings from project-config | openstack/charm-interface-rabbitmq | master | | import zuul job settings from project-config | openstack/charm-ironic | master | | import zuul job settings from project-config | openstack/charm-keystone | stable/18.08 | | import zuul job settings from project-config | openstack/charm-keystone | master | | import zuul job settings from project-config | openstack/charm-keystone-ldap | stable/18.08 | | import zuul job settings from project-config | openstack/charm-keystone-ldap | master | | import zuul job settings from project-config | openstack/charm-layer-ceph-base | master | | import zuul job settings from project-config | openstack/charm-layer-openstack | master | | import zuul job settings from project-config | openstack/charm-layer-openstack-api | master | | import zuul job settings from project-config | openstack/charm-layer-openstack-principle | master | | import zuul job settings from project-config | openstack/charm-lxd | stable/18.08 | | import zuul job settings from project-config | openstack/charm-lxd | master | | import zuul job settings from project-config | openstack/charm-manila | stable/18.08 | | import zuul job settings from project-config | openstack/charm-manila | master | | import zuul job settings from project-config | openstack/charm-manila-generic | stable/18.08 | | import zuul job settings from project-config | openstack/charm-manila-generic | master | | import zuul job settings from project-config | openstack/charm-manila-glusterfs | master | | import zuul job settings from project-config | openstack/charm-murano | master | | import zuul job settings from project-config | openstack/charm-neutron-api | stable/18.08 | | import zuul job 
settings from project-config | openstack/charm-neutron-api | master | | import zuul job settings from project-config | openstack/charm-neutron-api-genericswitch | master | | import zuul job settings from project-config | openstack/charm-neutron-api-odl | stable/18.08 | | import zuul job settings from project-config | openstack/charm-neutron-api-odl | master | | import zuul job settings from project-config | openstack/charm-neutron-dynamic-routing | stable/18.08 | | import zuul job settings from project-config | openstack/charm-neutron-dynamic-routing | master | | import zuul job settings from project-config | openstack/charm-neutron-gateway | stable/18.08 | | import zuul job settings from project-config | openstack/charm-neutron-gateway | master | | import zuul job settings from project-config | openstack/charm-neutron-openvswitch | stable/18.08 | | import zuul job settings from project-config | openstack/charm-neutron-openvswitch | master | | import zuul job settings from project-config | openstack/charm-nova-cloud-controller | stable/18.08 | | import zuul job settings from project-config | openstack/charm-nova-cloud-controller | master | | import zuul job settings from project-config | openstack/charm-nova-compute | stable/18.08 | | import zuul job settings from project-config | openstack/charm-nova-compute | master | | import zuul job settings from project-config | openstack/charm-nova-compute-proxy | stable/18.08 | | import zuul job settings from project-config | openstack/charm-nova-compute-proxy | master | | import zuul job settings from project-config | openstack/charm-odl-controller | stable/18.08 | | import zuul job settings from project-config | openstack/charm-odl-controller | master | | import zuul job settings from project-config | openstack/charm-openstack-dashboard | stable/18.08 | | import zuul job settings from project-config | openstack/charm-openstack-dashboard | master | | import zuul job settings from project-config | 
openstack/charm-openvswitch-odl | stable/18.08 | | import zuul job settings from project-config | openstack/charm-openvswitch-odl | master | | import zuul job settings from project-config | openstack/charm-percona-cluster | stable/18.08 | | import zuul job settings from project-config | openstack/charm-percona-cluster | master | | import zuul job settings from project-config | openstack/charm-rabbitmq-server | stable/18.08 | | import zuul job settings from project-config | openstack/charm-rabbitmq-server | master | | import zuul job settings from project-config | openstack/charm-specs | master | | import zuul job settings from project-config | openstack/charm-swift-proxy | stable/18.08 | | import zuul job settings from project-config | openstack/charm-swift-proxy | master | | import zuul job settings from project-config | openstack/charm-swift-storage | stable/18.08 | | import zuul job settings from project-config | openstack/charm-swift-storage | master | | import zuul job settings from project-config | openstack/charm-tempest | stable/18.08 | | import zuul job settings from project-config | openstack/charm-tempest | master | | import zuul job settings from project-config | openstack/charm-trove | master | | import zuul job settings from project-config | openstack/charms.ceph | master | | import zuul job settings from project-config | openstack/charms.openstack | master | +-------------------------------------------------------+---------------------------------------------------+---------------+ From ryan.beisner at canonical.com Tue Sep 11 21:07:01 2018 From: ryan.beisner at canonical.com (Ryan Beisner) Date: Tue, 11 Sep 2018 16:07:01 -0500 Subject: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team In-Reply-To: References: <5157f326-5422-6a76-efcd-a80439e5d778@gmail.com> Message-ID: +1 I'm always happy to see Felipe's contributions and fixes come through. Cheers! 
Ryan On Tue, Sep 11, 2018 at 1:10 PM James Page wrote: > +1 > > On Wed, 5 Sep 2018 at 15:48 Billy Olsen wrote: > >> Hi, >> >> I'd like to propose Felipe Reyes to join the OpenStack Charmers team as >> a core member. Over the past couple of years Felipe has contributed >> numerous patches and reviews to the OpenStack charms [0]. His experience >> and knowledge of the charms used in OpenStack and the usage of Juju make >> him a great candidate. >> >> [0] - >> >> https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22 >> >> Thanks, >> >> Billy Olsen >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jude at judecross.com Tue Sep 11 21:58:30 2018 From: jude at judecross.com (Jude Cross) Date: Tue, 11 Sep 2018 14:58:30 -0700 Subject: [openstack-dev] Re: [senlin] Nominations to Senlin Core Team In-Reply-To: <201809111929090515010@zte.com.cn> References: <201809111929090515010@zte.com.cn> Message-ID: +1 for Erik On Tue, Sep 11, 2018 at 4:29 AM wrote: > > +1 for both > > > Original Message > *From:* DucTruong > *To:* openstack-dev at lists.openstack.org > > *Date:* 2018-09-11 01:00 > *Subject:* *[openstack-dev] [senlin] Nominations to Senlin Core Team* > Hi Senlin Core Team, > > I would like to nominate 2 new core reviewers for Senlin: > > [1] Jude Cross (jucross at blizzard.com) > [2] Erik Olof Gunnar Andersson (eandersson at blizzard.com) > > Jude has been doing a number of reviews and contributed some important > patches to Senlin during the Rocky cycle that resolved locking > problems. > > Erik has the most number of reviews in Rocky and has contributed high > quality code reviews for some time. > > [1] > http://stackalytics.com/?module=senlin-group&metric=marks&release=rocky&user_id=jucross at blizzard.com > [2] > http://stackalytics.com/?module=senlin-group&metric=marks&user_id=eandersson&release=rocky > > Voting is open for 7 days. Please reply with your +1 vote in favor or > -1 as a veto vote.
> > Regards, > > Duc (dtruong) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Tue Sep 11 22:08:36 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 11 Sep 2018 16:08:36 -0600 Subject: [openstack-dev] [heat][glance] Heat image resource support issue In-Reply-To: References: Message-ID: Thanks Abhishek, I already added that to the Glance PTG etherpad. Since we have a schedule conflict, just let me know if we should be there as well; otherwise I hope you guys can help to resolve that issue. Thx! btw, if you do require us to be there, it might be better to schedule it in the afternoon on Wed. or Thu. On Thu, Sep 6, 2018 at 4:45 AM Abhishek Kekane wrote: > Hi Rico, > > Session times are not decided yet; could you please add your topic on [1] > so that it will be on the discussion list. > Also, glance sessions are scheduled from Wednesday to Friday between 9 AM and 5 > PM, so you can drop by at your convenience. > > [1] https://etherpad.openstack.org/p/stein-ptg-glance-planning > > > Thanks & Best Regards, > > Abhishek Kekane > > On Thu, Sep 6, 2018 at 3:48 PM, Rico Lin > wrote: > >> >> On Thu, Sep 6, 2018 at 12:52 PM Abhishek Kekane >> wrote: >> >>> Hi Rico, >>> >>> We will discuss this during the PTG; however, in the meantime you can add >>> WSGI_MODE=mod_wsgi in local.conf for testing purposes.
>>> >> >> Cool, if you can let me know which session it is, I will try to be there >> if there's no conflict >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Sep 11 22:27:12 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 11 Sep 2018 16:27:12 -0600 Subject: [openstack-dev] [upgrade] request for pre-upgrade check for db purge In-Reply-To: References: Message-ID: <87668fc4-c2a2-9b0e-8c3e-4843319cbd87@gmail.com> On 9/11/2018 9:01 AM, Dan Smith wrote: > I dunno, adding something to nova.conf that is only used for nova-status > like that seems kinda weird to me. It's just a warning/informational > sort of thing so it just doesn't seem worth the complication to me. It doesn't seem complicated to me; I'm not sure why the config is weird, but maybe just because it's config-driven CLI behavior? > > Moving it to an age thing set at one year seems okay, and better than > making the absolute limit more configurable. > > Any reason why this wouldn't just be a command line flag to status if > people want it to behave in a specific way from a specific tool?
I always think of the pre-upgrade checks as release-specific and we could drop the old ones at some point, so that's why I wasn't thinking about adding check-specific options to the command - but since we also say it's OK to run "nova-status upgrade check" to verify a green install, it's probably good to leave the old checks in place, i.e. you're likely always going to want those cells v2 and placement checks we added in ocata even long after ocata EOL. -- Thanks, Matt From jtomasek at redhat.com Tue Sep 11 22:43:04 2018 From: jtomasek at redhat.com (Jiri Tomasek) Date: Tue, 11 Sep 2018 16:43:04 -0600 Subject: [openstack-dev] [tripleo] Posibilities to aggregate/merge configs across templates In-Reply-To: References: Message-ID: Hi, The problems you're describing are close to the discussion we had with Mathieu Bultel here [1]. Currently to set some parameters values as ultimate source of truth, you need to put them in plan-environment.yaml. Ignoring the fact that CLI now merges environments itself (fixed by [2] and not affecting this behaviour), the Mistral workflows pass the environments to heat in order in which they are provided with -e option and then as last environment it applies parameter_defaults from plan-environment.yaml. The result of [1] effort is going to be that the way deployment configuration (roles setting, networks selection, environments selection and explicit parameters setting) is going to be done the same by both CLI and GUI through Mistral Workflows which already exist but are used only by GUI. When you look at plan-environment.yaml in Swift, you can see the list of environment files in order in which they're merged as well as parameters which are going to override the values in environments in case of collision. Merging strategy for parameters is an interesting problem, configuring this in t-h-t looks like a good solution to me. Note that the GUI always displays the parameter values which it is getting from GetParameters Mistral action. 
This action gets the parameter values from Heat by running heat validate. This means that it always displays the real parameter values which are actually going to be applied by Heat as a result of all the merging. If the user updates that value with the GUI, it will end up being set in plan-environment.yaml. -- Jirka [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134511.html [2] https://review.openstack.org/#/c/448209/ On Tue, Sep 4, 2018 at 9:54 AM Kamil Sambor wrote: > Hi all, > > I want to start a discussion on how to solve the issue of merging environment > values in TripleO. > > Description: > In TripleO we experience some issues related to setting parameters in heat > templates. First, it isn't possible to set some params as the ultimate source > of truth (disallow overwriting the param in other heat templates). Second, it > isn't possible to merge values from different templates [0][1]. > Both features are implemented in heat and can easily be used in > templates.[2][3] > This doesn't work in TripleO because we overwrite all values in the template in the > python client instead of aggregating them, etc., or simply letting heat do the > job.[4][5] > > Solution: > Example solutions are: we can fix how the python tripleo client works with envs > and templates and enable the heat features, or we can write some puppet code > that will work similarly to the firewall code [6] and will support aggregating and > merging the values that we point out. Both solutions have pros and cons, but IMHO the > solution which lets heat do the job is preferable. On the other hand, the merging solution > gives us the possibility of full control over the merging of environments. > > Problems: > Only a few as a start: with both solutions we will have the same problem of > porting new patches which use these functionalities to older versions of > RHEL. Also, upgrades to a new version can be really problematic.
Also, changes > which enable the heat feature will totally change how templates work, and > we > will need to change all templates, change the default behavior (which is to > merge > params) to override behavior, and also add the possibility to temporarily run the old > behavior. > > In the end, I prepared two patchsets with two PoCs in progress. The first one > merges > envs in the tripleo client but uses the heat merging functionality: > https://review.openstack.org/#/c/599322/ . And the second, where we ignore the > merged > env and move all files and add them into the deployment plan environments: > https://review.openstack.org/#/c/599559/ > > What do you think about each solution? Which solution should be used > in TripleO? > > Best, > Kamil Sambor > > [0] https://bugs.launchpad.net/tripleo/+bug/1716391 > [1] https://bugs.launchpad.net/heat/+bug/1635409 > [2] > https://docs.openstack.org/heat/pike/template_guide/environment.html#restrict-update-or-replace-of-a-given-resource > [3] > https://docs.openstack.org/heat/pike/template_guide/environment.html#environment-merging > [4] > https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/utils.py#L1019 > [5] > https://github.com/openstack/python-heatclient/blob/f73c2a4177377b710a02577feea38560b00a24bf/heatclient/common/template_utils.py#L191 > [6] > https://github.com/openstack/puppet-tripleo/tree/master/manifests/firewall > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gmann at ghanshyammann.com Tue Sep 11 22:56:55 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 12 Sep 2018 07:56:55 +0900 Subject: [openstack-dev] [QA][PTG] QA Dinner Night In-Reply-To: <165c32f84a0.12590ec99138735.7433594415948373989@ghanshyammann.com> References: <165c2eca924.f0bde864135262.3309860175009501982@ghanshyammann.com> <0980e42e-5a32-9925-195e-4066f4fcae02@suse.com> <165c32f84a0.12590ec99138735.7433594415948373989@ghanshyammann.com> Message-ID: <165cada1707.11837295632026.6095949783595060282@ghanshyammann.com> Hi All, We have finalized the place and time for QA dinner which is tomorrow night. Here are the details: Restaurant : Famous Dave's - https://goo.gl/maps/G7gjpsJUEV72 Wednesday night, 6:30 PM Meeting time at lobby: 6.15 PM -gmann ---- On Mon, 10 Sep 2018 20:13:15 +0900 Ghanshyam Mann wrote ---- > > > > ---- On Mon, 10 Sep 2018 19:35:58 +0900 Andreas Jaeger wrote ---- > > On 10/09/2018 12.00, Ghanshyam Mann wrote: > > > Hi All, > > > > > > I'd like to propose a QA Dinner night for the QA team at the DENVER PTG. I initiated a doodle vote [1] to choose Tuesday or Wednesday night. > > > > Dublin or Denver? Hope you're not time traveling or went to wrong > > location ;) > > > > heh, thanks for correction. Yes it is Denver :). > > > > Andreas > > > > > NOTE: Anyone engaged in QA activities (not necessary to be QA core) are welcome to join. > > > > > > > > > [1] https://doodle.com/poll/68fudz937v22ghnv > > > > > > -gmann > > > > > > > > > > > > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > -- > > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > > SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany > > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > > HRB 21284 (AG Nürnberg) > > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > > > > > From tengqim at cn.ibm.com Wed Sep 12 03:48:31 2018 From: tengqim at cn.ibm.com (Qiming Teng) Date: Wed, 12 Sep 2018 03:48:31 +0000 Subject: [openstack-dev] [senlin] Nominations to Senlin Core Team In-Reply-To: References: Message-ID: <20180912034830.GA14550@rcp.sl.cloud9.ibm.com> +2 to both changes. - Qiming From chris.macnaughton at canonical.com Wed Sep 12 05:50:17 2018 From: chris.macnaughton at canonical.com (Chris MacNaughton) Date: Wed, 12 Sep 2018 07:50:17 +0200 Subject: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team In-Reply-To: References: <5157f326-5422-6a76-efcd-a80439e5d778@gmail.com> Message-ID: <397c2a43-42b5-dd92-81cf-120532aa27a4@canonical.com> +1 Felipe has been a solid contributor to the Openstack Charms for some time now. Chris On 11-09-18 23:07, Ryan Beisner wrote: > +1  I'm always happy to see Felipe's contributions and fixes come through. > > Cheers! > > Ryan > > > > > On Tue, Sep 11, 2018 at 1:10 PM James Page > wrote: > > +1 > > On Wed, 5 Sep 2018 at 15:48 Billy Olsen > wrote: > > Hi, > > I'd like to propose Felipe Reyes to join the OpenStack > Charmers team as > a core member. Over the past couple of years Felipe has > contributed > numerous patches and reviews to the OpenStack charms [0]. His > experience > and knowledge of the charms used in OpenStack and the usage of > Juju make > him a great candidate. 
> > [0] - > https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22 > > Thanks, > > Billy Olsen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From cybing4 at gmail.com Wed Sep 12 06:07:30 2018 From: cybing4 at gmail.com (Yuanbing Chen) Date: Wed, 12 Sep 2018 14:07:30 +0800 Subject: [openstack-dev] [senlin] Nominations to Senlin Core Team Message-ID: +1 I agree with you From: Duc Truong Date: Mon, Sep 10, 2018 at 9:59 AM Subject: [openstack-dev][senlin] Nominations to Senlin Core Team To: Hi Senlin Core Team, I would like to nominate 2 new core reviewers for Senlin: [1] Jude Cross (jucross at blizzard.com) [2] Erik Olof Gunnar Andersson (eandersson at blizzard.com) Jude has been doing a number of reviews and contributed some important patches to Senlin during the Rocky cycle that resolved locking problems. Erik has the most number of reviews in Rocky and has contributed high quality code reviews for some time. 
[1] http://stackalytics.com/?module=senlin-group&metric=marks&release=rocky&user_id=jucross at blizzard.com [2] http://stackalytics.com/?module=senlin-group&metric=marks&user_id=eandersson&release=rocky Voting is open for 7 days. Please reply with your +1 vote in favor or -1 as a veto vote. Regards, Duc (dtruong) -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel at redhat.com Wed Sep 12 06:23:34 2018 From: michel at redhat.com (Michel Peterson) Date: Wed, 12 Sep 2018 09:23:34 +0300 Subject: [openstack-dev] [networking-odl][networking-bgpvpn][ceilometer] all requirement updates are currently blocked In-Reply-To: <20180911042914.GI16495@thor.bakeyournoodle.com> References: <20180901005209.xb5ej2ifw3bzb5zf@gentoo.org> <20180905150309.cxstnk6i2sms6pj4@gentoo.org> <20180911042914.GI16495@thor.bakeyournoodle.com> Message-ID: On Tue, Sep 11, 2018 at 7:29 AM, Tony Breeds wrote: > > So I think we have the required reviews lined up to fix master, but they > need votes from zuul and core teams. > > Thanks a lot for the work, Tony. On the n-odl side, when the Depends-On gets merged I'll give it a +W. -------------- next part -------------- An HTML attachment was scrubbed... URL: From liam.young at canonical.com Wed Sep 12 06:29:41 2018 From: liam.young at canonical.com (Liam Young) Date: Wed, 12 Sep 2018 07:29:41 +0100 Subject: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team In-Reply-To: <397c2a43-42b5-dd92-81cf-120532aa27a4@canonical.com> References: <5157f326-5422-6a76-efcd-a80439e5d778@gmail.com> <397c2a43-42b5-dd92-81cf-120532aa27a4@canonical.com> Message-ID: +1 and thanks for all your contributions Felipe! On Wed, Sep 12, 2018 at 6:51 AM Chris MacNaughton < chris.macnaughton at canonical.com> wrote: > +1 Felipe has been a solid contributor to the Openstack Charms for some > time now. 
> > Chris > > On 11-09-18 23:07, Ryan Beisner wrote: > > +1 I'm always happy to see Felipe's contributions and fixes come through. > > Cheers! > > Ryan > > > > > On Tue, Sep 11, 2018 at 1:10 PM James Page > wrote: > >> +1 >> >> On Wed, 5 Sep 2018 at 15:48 Billy Olsen wrote: >> >>> Hi, >>> >>> I'd like to propose Felipe Reyes to join the OpenStack Charmers team as >>> a core member. Over the past couple of years Felipe has contributed >>> numerous patches and reviews to the OpenStack charms [0]. His experience >>> and knowledge of the charms used in OpenStack and the usage of Juju make >>> him a great candidate. >>> >>> [0] - >>> >>> https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22 >>> >>> Thanks, >>> >>> Billy Olsen >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part 
-------------- An HTML attachment was scrubbed... URL: From smonderer at vasonanetworks.com Wed Sep 12 08:53:36 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Wed, 12 Sep 2018 11:53:36 +0300 Subject: [openstack-dev] [tripleo] VFs not configured in SR-IOV role In-Reply-To: References: Message-ID: Hi Saravanan, I'm using RHOSP13. The neutron-sriov-agent.yaml is missing "OS::TripleO::Services::NeutronSriovHostConfig" Regards, Samuel On Fri, Sep 7, 2018 at 1:08 PM Saravanan KR wrote: > Not sure which version you are using, but the service > "OS::TripleO::Services::NeutronSriovHostConfig" is responsible for > setting up VFs. Check if this service is enabled in the deployment. > One of the missing place is being fixed - > https://review.openstack.org/#/c/597985/ > > Regards, > Saravanan KR > On Tue, Sep 4, 2018 at 8:58 PM Samuel Monderer > wrote: > > > > Hi, > > > > Attached is the used to deploy an overcloud with SR-IOV role. > > The deployment completed successfully but the VFs aren't configured on > the host. > > Can anyone have a look at what I missed. > > > > Thanks > > Samuel > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
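For anyone hitting the same symptom: the service Saravanan mentions is normally mapped in through a Heat environment file. Below is a minimal sketch of what that can look like. The relative paths and the NeutronSriovNumVFs value are illustrative assumptions; they depend on where the environment file sits in your tripleo-heat-templates checkout and on your NIC naming, so adjust before use.

```yaml
# Sketch of an SR-IOV environment file; paths are relative to the
# environment file's location inside tripleo-heat-templates and may
# differ in your tree. NeutronSriovNumVFs uses the
# "<interface>:<num_vfs>" form; "ens2f0:8" below is a placeholder.
resource_registry:
  OS::TripleO::Services::NeutronSriovAgent: ../../puppet/services/neutron-sriov-agent.yaml
  OS::TripleO::Services::NeutronSriovHostConfig: ../../puppet/services/neutron-sriov-host-config.yaml

parameter_defaults:
  NeutronSriovNumVFs:
    - "ens2f0:8"
```

Without the NeutronSriovHostConfig mapping, the agent service can deploy cleanly while the VFs are never created on the host, which matches the symptom described above.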
URL: From jistr at redhat.com Wed Sep 12 09:37:17 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Wed, 12 Sep 2018 11:37:17 +0200 Subject: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do In-Reply-To: References: Message-ID: <00d727de-a121-5fe6-9bc6-39ddaac2d4d5@redhat.com> On 11.9.2018 18:53, Alex Schultz wrote: > Thanks everyone for coming and chatting. From the meeting we've had a > few items where we can collaborate together. > > Here are some specific bullet points: > > - TripleO folks should feel free to propose some minor structural > changes if they make the integration easier. TripleO is currently > investigating what it would look like to pull the keystone ansible > parts out of tripleo-heat-templates and put it into > ansible-role-tripleo-keystone. It would be beneficial to use this > role as an example for how the os_keystone role can be consumed. +1, please let's also focus on information flow towards Upgrades squad (and collab if needed) as ansible-role-tripleo-keystone is being implemented. It sounds like something that might potentially require rethinking how updates/upgrades work too. Maybe we could have a spec where the solution is described and we can assess/discuss the upgrades impact. Thanks Jirka > - The openstack-ansible-tests has some good examples of ansible-lint > rules that can be used to improve quality > - Tags could be used to limit the scope of OpenStack Ansible roles, > but it sounds like including tasks would be a better pattern. > - Need to establish a pattern for disabling packaging/service > configurations globally in OpenStack Ansible roles. > - Shared roles are open for reuse/replacement if something better is > available (upstream/elsewhere). > > If anyone has any others, feel free to comment. 
> > Thanks, > -Alex > > On Mon, Sep 10, 2018 at 10:58 AM, Alex Schultz wrote: >> I just realized I booked the room and put it in the etherpad but >> forgot to email out the time. >> >> Time: Tuesday 09:00-10:45 >> Room: Big Thompson >> >> https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg >> >> Thanks, >> -Alex >> >> On Tue, Sep 4, 2018 at 1:03 PM, Alex Schultz wrote: >>> On Thu, Aug 9, 2018 at 2:43 PM, Mohammed Naser wrote: >>>> Hi Alex, >>>> >>>> I am very much in favour of what you're bringing up. We do have >>>> multiple projects that leverage Ansible in different ways and we all >>>> end up doing the same thing at the end. The duplication of work is >>>> not really beneficial for us as it takes away from our use-cases. >>>> >>>> I believe that there is a certain number of steps that we all share >>>> regardless of how we deploy, some of the things that come up to me >>>> right away are: >>>> >>>> - Configuring infrastructure services (i.e.: create vhosts for service >>>> in rabbitmq, create databases for services, configure users for >>>> rabbitmq, db, etc) >>>> - Configuring inter-OpenStack services (i.e. keystone_authtoken >>>> section, creating endpoints, etc and users for services) >>>> - Configuring actual OpenStack services (i.e. >>>> /etc//.conf file with the ability of extending >>>> options) >>>> - Running CI/integration on a cloud (i.e. common role that literally >>>> gets an admin user, password and auth endpoint and creates all >>>> resources and does CI) >>>> >>>> This would deduplicate a lot of work, and especially the last one, it >>>> might be beneficial for more than Ansible-based projects, I can >>>> imagine Puppet OpenStack leveraging this as well inside Zuul CI >>>> (optionally)... However, I think that this something which we should >>>> discus further for the PTG. I think that there will be a tiny bit >>>> upfront work as we all standarize but then it's a win for all involved >>>> communities. 
>>>> >>>> I would like to propose that deployment tools maybe sit down together >>>> at the PTG, all share how we use Ansible to accomplish these tasks and >>>> then perhaps we can work all together on abstracting some of these >>>> concepts together for us to all leverage. >>>> >>> >>> I'm currently trying to get a spot on Tuesday morning to further >>> discuss some of this items. In the mean time I've started an >>> etherpad[0] to start collecting ideas for things to discuss. At the >>> moment I've got the tempest role collaboration and some basic ideas >>> for best practice items that we can discuss. Feel free to add your >>> own and I'll update the etherpad with a time slot when I get one >>> nailed down. >>> >>> Thanks, >>> -Alex >>> >>> [0] https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg >>> >>>> I'll let others chime in as well. >>>> >>>> Regards, >>>> Mohammed >>>> >>>> On Thu, Aug 9, 2018 at 4:31 PM, Alex Schultz wrote: >>>>> Ahoy folks, >>>>> >>>>> I think it's time we come up with some basic rules/patterns on where >>>>> code lands when it comes to OpenStack related Ansible roles and as we >>>>> convert/export things. There was a recent proposal to create an >>>>> ansible-role-tempest[0] that would take what we use in >>>>> tripleo-quickstart-extras[1] and separate it for re-usability by >>>>> others. So it was asked if we could work with the openstack-ansible >>>>> team and leverage the existing openstack-ansible-os_tempest[2]. It >>>>> turns out we have a few more already existing roles laying around as >>>>> well[3][4]. >>>>> >>>>> What I would like to propose is that we as a community come together >>>>> to agree on specific patterns so that we can leverage the same roles >>>>> for some of the core configuration/deployment functionality while >>>>> still allowing for specific project specific customization. 
What I've >>>>> noticed between all the project is that we have a few specific core >>>>> pieces of functionality that needs to be handled (or skipped as it may >>>>> be) for each service being deployed. >>>>> >>>>> 1) software installation >>>>> 2) configuration management >>>>> 3) service management >>>>> 4) misc service actions >>>>> >>>>> Depending on which flavor of the deployment you're using, the content >>>>> of each of these may be different. Just about the only thing that is >>>>> shared between them all would be the configuration management part. >>>>> To that, I was wondering if there would be a benefit to establishing a >>>>> pattern within say openstack-ansible where we can disable items #1 and >>>>> #3 but reuse #2 in projects like kolla/tripleo where we need to do >>>>> some configuration generation. If we can't establish a similar >>>>> pattern it'll make it harder to reuse and contribute between the >>>>> various projects. >>>>> >>>>> In tripleo we've recently created a bunch of ansible-role-tripleo-* >>>>> repositories which we were planning on moving the tripleo specific >>>>> tasks (for upgrades, etc) to and were hoping that we might be able to >>>>> reuse the upstream ansible roles similar to how we've previously >>>>> leverage the puppet openstack work for configurations. So for us, it >>>>> would be beneficial if we could maybe help align/contribute/guide the >>>>> configuration management and maybe misc service action portions of the >>>>> openstack-ansible roles, but be able to disable the actual software >>>>> install/service management as that would be managed via our >>>>> ansible-role-tripleo-* roles. >>>>> >>>>> Is this something that would be beneficial to further discuss at the >>>>> PTG? Anyone have any additional suggestions/thoughts? 
>>>>> >>>>> My personal thoughts for tripleo would be that we'd have >>>>> tripleo-ansible calls openstack-ansible- for core config but >>>>> package/service installation disabled and calls >>>>> ansible-role-tripleo- for tripleo specific actions such as >>>>> opinionated packages/service configuration/upgrades. Maybe this is >>>>> too complex? But at the same time, do we need to come up with 3 >>>>> different ways to do this? >>>>> >>>>> Thanks, >>>>> -Alex >>>>> >>>>> [0] https://review.openstack.org/#/c/589133/ >>>>> [1] http://git.openstack.org/cgit/openstack/tripleo-quickstart-extras/tree/roles/validate-tempest >>>>> [2] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest/ >>>>> [3] http://git.openstack.org/cgit/openstack/kolla-ansible/tree/ansible/roles/tempest >>>>> [4] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-tempest >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> >>>> -- >>>> Mohammed Naser — vexxhost >>>> ----------------------------------------------------- >>>> D. 514-316-8872 >>>> D. 800-910-1726 ext. 200 >>>> E. mnaser at vexxhost.com >>>> W. 
http://vexxhost.com >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From alex.kavanagh at canonical.com Wed Sep 12 10:03:25 2018 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Wed, 12 Sep 2018 11:03:25 +0100 Subject: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team In-Reply-To: References: <5157f326-5422-6a76-efcd-a80439e5d778@gmail.com> <397c2a43-42b5-dd92-81cf-120532aa27a4@canonical.com> Message-ID: +1 from me too. On Wed, Sep 12, 2018 at 7:29 AM, Liam Young wrote: > +1 and thanks for all your contributions Felipe! > > On Wed, Sep 12, 2018 at 6:51 AM Chris MacNaughton < > chris.macnaughton at canonical.com> wrote: > >> +1 Felipe has been a solid contributor to the Openstack Charms for some >> time now. >> >> Chris >> >> On 11-09-18 23:07, Ryan Beisner wrote: >> >> +1 I'm always happy to see Felipe's contributions and fixes come through. >> >> Cheers! >> >> Ryan >> >> >> >> >> On Tue, Sep 11, 2018 at 1:10 PM James Page >> wrote: >> >>> +1 >>> >>> On Wed, 5 Sep 2018 at 15:48 Billy Olsen wrote: >>> >>>> Hi, >>>> >>>> I'd like to propose Felipe Reyes to join the OpenStack Charmers team as >>>> a core member. Over the past couple of years Felipe has contributed >>>> numerous patches and reviews to the OpenStack charms [0]. His experience >>>> and knowledge of the charms used in OpenStack and the usage of Juju make >>>> him a great candidate. 
>>>> >>>> [0] - >>>> https://review.openstack.org/#/q/owner:%22Felipe+Reyes+% >>>> 253Cfelipe.reyes%2540canonical.com%253E%22 >>>> >>>> Thanks, >>>> >>>> Billy Olsen >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>>> unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Alex Kavanagh - Software Engineer Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shyam.biradar at trilio.io Wed Sep 12 11:21:29 2018 From: shyam.biradar at trilio.io (Shyam Biradar) Date: Wed, 12 Sep 2018 16:51:29 +0530 Subject: [openstack-dev] Committing proprietary plugins to OpenStack Message-ID: Hi, We have a proprietary openstack plugin. We want to commit deployment scripts like containers and heat templates to upstream in tripleo and kolla project but not actual product code. Is it possible? Or How can we handle this case? Any thoughts are welcome. [image: logo] *Shyam Biradar* * Software Engineer | DevOps* M +91 8600266938 | shyam.biradar at trilio.io | trilio.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Wed Sep 12 11:54:47 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 12 Sep 2018 13:54:47 +0200 Subject: [openstack-dev] [kolla] Committing proprietary plugins to OpenStack In-Reply-To: References: Message-ID: <6ece4952-6ea2-70e6-2b7d-3c2d4dbe8287@suse.com> On 2018-09-12 13:21, Shyam Biradar wrote: > Hi, > > We have a proprietary openstack plugin. We want to commit deployment > scripts like containers and heat templates to upstream in tripleo and > kolla project but not actual product code. > > Is it possible? Or How can we handle this case? Any thoughts are welcome. It's first a legal question - is everything you are pushing under the Apache license as the rest of the project that you push to? And then a policy of kolla project, so let me tag them Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From shyam.biradar at trilio.io Wed Sep 12 12:01:48 2018 From: shyam.biradar at trilio.io (Shyam Biradar) Date: Wed, 12 Sep 2018 17:31:48 +0530 Subject: [openstack-dev] [kolla] Committing proprietary plugins to OpenStack In-Reply-To: <6ece4952-6ea2-70e6-2b7d-3c2d4dbe8287@suse.com> References: <6ece4952-6ea2-70e6-2b7d-3c2d4dbe8287@suse.com> Message-ID: Yes Andreas, whatever deployment scripts we push will be under the Apache license. [image: logo] *Shyam Biradar* * Software Engineer | DevOps* M +91 8600266938 | shyam.biradar at trilio.io | trilio.io On Wed, Sep 12, 2018 at 5:24 PM, Andreas Jaeger wrote: > On 2018-09-12 13:21, Shyam Biradar wrote: > >> Hi, >> >> We have a proprietary openstack plugin. We want to commit deployment >> scripts like containers and heat templates to upstream in tripleo and kolla >> project but not actual product code. >> >> Is it possible? Or How can we handle this case? Any thoughts are welcome. >> > > It's first a legal question - is everything you are pushing under the > Apache license as the rest of the project that you push to? > > And then a policy of kolla project, so let me tag them > > Andreas > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, > Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Wed Sep 12 12:14:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 12 Sep 2018 06:14:23 -0600 Subject: [openstack-dev] Committing proprietary plugins to OpenStack In-Reply-To: References: Message-ID: <1536754319-sup-7501@lrrr.local> Excerpts from Shyam Biradar's message of 2018-09-12 16:51:29 +0530: > Hi, > > We have a proprietary openstack plugin. We want to commit deployment > scripts like containers and heat templates to upstream in tripleo and kolla > project but not actual product code. > > Is it possible? Or How can we handle this case? Any thoughts are welcome. > > [image: logo] > *Shyam Biradar* * Software Engineer | DevOps* > M +91 8600266938 | shyam.biradar at trilio.io | trilio.io What is your motivation for open sourcing the deployment tools but not the plugin itself? We usually want the things committed upstream to be testable in some way. Do you have the ability to set up third-party CI of some sort for the deployment tools you're talking about? Doug From reedip14 at gmail.com Wed Sep 12 13:00:44 2018 From: reedip14 at gmail.com (reedip banerjee) Date: Wed, 12 Sep 2018 18:30:44 +0530 Subject: [openstack-dev] [openstack][infra]Including Functional Tests in Coverage Message-ID: Hi All, Has anyone ever experimented with including functional tests in the openstack-tox-cover job? We would like to include functional tests in the coverage jobs and already have a dsvm-functional job in place. However, openstack-tox-cover is, if I am correct, an infra job which is called directly. Has anyone tried to include the functional tests and all their pre-requisites in the cover job? If so, any pointers would be great -- Thanks and Regards, Reedip Banerjee IRC: reedipb -------------- next part -------------- An HTML attachment was scrubbed... 
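One way to approach this, assuming the functional tests can run without a devstack environment: extend the cover env in tox.ini so the coverage run includes the functional test path as well as the unit tests. This is only a sketch; the package name and test paths are assumptions about the repo layout, and a dsvm-based functional suite would instead need a new Zuul job built on the dsvm job rather than on openstack-tox-cover, since the tox job does not provide devstack pre-requisites.

```ini
# Hypothetical tox.ini fragment: run unit and functional tests under
# coverage and combine the results. "myproject" and the functional
# test path are placeholders for the real package layout.
[testenv:cover]
setenv =
    PYTHON=coverage run --source myproject --parallel-mode
commands =
    stestr run {posargs}
    stestr --test-path ./myproject/tests/functional run
    coverage combine
    coverage html -d cover
    coverage xml -o cover/coverage.xml
```

The --parallel-mode flag makes each test runner invocation write its own coverage data file, so the two stestr runs can be merged afterwards with coverage combine.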
URL: From smonderer at vasonanetworks.com Wed Sep 12 13:18:21 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Wed, 12 Sep 2018 16:18:21 +0300 Subject: [openstack-dev] [tripleo] VFs not configured in SR-IOV role In-Reply-To: References: Message-ID: Adding the following to neutron-sriov.yaml solved the problem OS::TripleO::Services::NeutronSriovHostConfig: ../../puppet/services/neutron-sriov-host-config.yaml On Wed, Sep 12, 2018 at 11:53 AM Samuel Monderer < smonderer at vasonanetworks.com> wrote: > Hi Saravanan, > > I'm using RHOSP13. > The neutron-sriov-agent.yaml is missing "OS::TripleO::Services:: > NeutronSriovHostConfig" > > Regards, > Samuel > > On Fri, Sep 7, 2018 at 1:08 PM Saravanan KR wrote: > >> Not sure which version you are using, but the service >> "OS::TripleO::Services::NeutronSriovHostConfig" is responsible for >> setting up VFs. Check if this service is enabled in the deployment. >> One of the missing place is being fixed - >> https://review.openstack.org/#/c/597985/ >> >> Regards, >> Saravanan KR >> On Tue, Sep 4, 2018 at 8:58 PM Samuel Monderer >> wrote: >> > >> > Hi, >> > >> > Attached is the used to deploy an overcloud with SR-IOV role. >> > The deployment completed successfully but the VFs aren't configured on >> the host. >> > Can anyone have a look at what I missed. 
>> > >> > Thanks >> > Samuel >> > ____________________________________________________________ >> ______________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Sep 12 14:59:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 12 Sep 2018 08:59:35 -0600 Subject: [openstack-dev] [goals][python3] week 5 update Message-ID: <1536764103-sup-8972@lrrr.local> This is week 5 of the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == What we learned last week == Lance started a thread to talk about the tox_install.sh that some projects are encountering in their stable branches. http://lists.openstack.org/pipermail/openstack-dev/2018-September/134391.html A few reviewers have asked about why the patches to add zuul settings in stable branches are being proposed directly to those branches instead of being backported. As we describe in the goal, sometimes it would have been possible to backport the settings from master but in some cases projects have different jobs on different branches. The simplest way to ensure we maintained those settings correctly was to prepare a separate patch for each branch and submit it directly. Since these patches do not contain production code, and are only configuring our CI system, the normal stable backport policy doesn't apply. 
https://governance.openstack.org/tc/goals/stein/python3-first.html#moving-settings-from-project-config

A couple of reviewers have asked about why the "queue" settings are not included in the settings being copied into each repository. Because the queue setting ties multiple projects together in the gate pipeline, we need to be careful about how we manage that. We decided to manage the "integrated" queue in project-config because that needs to be coordinated between teams, but to allow project teams to add other queue settings in-tree (for example, some projects that are not part of the integrated gate tie their server and client repos together into 1 queue).

== Ongoing and Completed Work ==

All of our teams have started the zuul migration work (and more than half are finished)!

+---------------------+------+--------+-------+-----------------------+--------------------+
| Team | Open | Failed | Total | Status | Champion |
+---------------------+------+--------+-------+-----------------------+--------------------+
| adjutant | 0 | 0 | 4 | DONE | Doug Hellmann |
| barbican | 11 | 6 | 13 | migration in progress | Doug Hellmann |
| blazar | 16 | 0 | 16 | migration in progress | Nguyen Hai |
| Chef OpenStack | 0 | 0 | 1 | DONE | Doug Hellmann |
| cinder | 8 | 6 | 22 | migration in progress | Doug Hellmann |
| cloudkitty | 0 | 0 | 17 | DONE | Doug Hellmann |
| congress | 0 | 0 | 16 | DONE | Nguyen Hai |
| cyborg | 0 | 0 | 9 | DONE | Nguyen Hai |
| designate | 0 | 0 | 17 | DONE | Nguyen Hai |
| Documentation | 0 | 0 | 12 | DONE | Doug Hellmann |
| dragonflow | 1 | 0 | 4 | migration in progress | Nguyen Hai |
| ec2-api | 0 | 0 | 7 | DONE | |
| freezer | 5 | 2 | 23 | migration in progress | |
| glance | 16 | 0 | 16 | migration in progress | Nguyen Hai |
| heat | 5 | 3 | 27 | migration in progress | Doug Hellmann |
| horizon | 0 | 0 | 8 | DONE | Nguyen Hai |
| I18n | 0 | 0 | 2 | DONE | Doug Hellmann |
| InteropWG | 0 | 0 | 4 | DONE | Doug Hellmann |
| ironic | 15 | 3 | 60 | migration in progress | Doug Hellmann |
| karbor | 15 | 0 | 15 | migration in progress | Nguyen Hai |
| keystone | 3 | 1 | 30 | migration in progress | Doug Hellmann |
| kolla | 0 | 0 | 8 | DONE | |
| kuryr | 5 | 4 | 16 | migration in progress | Doug Hellmann |
| magnum | 7 | 2 | 17 | migration in progress | |
| manila | 6 | 3 | 19 | migration in progress | Goutham Pacha Ravi |
| masakari | 16 | 0 | 18 | migration in progress | Nguyen Hai |
| mistral | 0 | 0 | 25 | DONE | Nguyen Hai |
| monasca | 3 | 3 | 66 | migration in progress | Doug Hellmann |
| murano | 0 | 0 | 25 | DONE | |
| neutron | 31 | 19 | 74 | migration in progress | Doug Hellmann |
| nova | 15 | 0 | 23 | migration in progress | |
| octavia | 0 | 0 | 23 | DONE | Nguyen Hai |
| OpenStack Charms | 20 | 16 | 118 | migration in progress | Doug Hellmann |
| OpenStack-Helm | 0 | 0 | 2 | DONE | |
| OpenStackAnsible | 34 | 20 | 270 | migration in progress | |
| OpenStackClient | 0 | 0 | 16 | DONE | |
| OpenStackSDK | 12 | 1 | 15 | migration in progress | |
| oslo | 0 | 0 | 157 | DONE | Doug Hellmann |
| Packaging-rpm | 0 | 0 | 3 | DONE | Doug Hellmann |
| PowerVMStackers | 0 | 0 | 15 | DONE | Doug Hellmann |
| Puppet OpenStack | 1 | 0 | 193 | migration in progress | Doug Hellmann |
| qinling | 0 | 0 | 6 | DONE | |
| Quality Assurance | 6 | 0 | 28 | migration in progress | Doug Hellmann |
| rally | 0 | 0 | 2 | DONE | Nguyen Hai |
| Release Management | 0 | 0 | 1 | DONE | Doug Hellmann |
| requirements | 0 | 0 | 5 | DONE | Doug Hellmann |
| sahara | 0 | 0 | 27 | DONE | Doug Hellmann |
| searchlight | 0 | 0 | 13 | DONE | Nguyen Hai |
| senlin | 0 | 0 | 16 | DONE | Nguyen Hai |
| SIGs | 0 | 0 | 6 | DONE | Doug Hellmann |
| solum | 0 | 0 | 17 | DONE | Nguyen Hai |
| storlets | 0 | 0 | 5 | DONE | |
| swift | 0 | 0 | 11 | DONE | Nguyen Hai |
| tacker | 4 | 0 | 16 | migration in progress | Nguyen Hai |
| Technical Committee | 0 | 0 | 5 | DONE | Doug Hellmann |
| Telemetry | 15 | 6 | 31 | migration in progress | Doug Hellmann |
| tricircle | 2 | 2 | 9 | migration in progress | Nguyen Hai |
| tripleo | 58 | 29 | 134 | migration in progress | Doug Hellmann |
| trove | 17 | 2 | 17 | migration in progress | Doug Hellmann |
| User Committee | 0 | 0 | 4 | waiting for cleanup | Doug Hellmann |
| vitrage | 0 | 0 | 17 | DONE | Nguyen Hai |
| watcher | 0 | 0 | 17 | DONE | Nguyen Hai |
| winstackers | 0 | 0 | 11 | DONE | |
| zaqar | 12 | 4 | 17 | migration in progress | |
| zun | 0 | 0 | 13 | DONE | Nguyen Hai |
| TOTAL | 359 | 132 | 1855 | 36/65 DONE | |
+---------------------+------+--------+-------+-----------------------+--------------------+

== Next Steps ==

If your team is ready to have your zuul settings migrated, please let us know by following up to this email. We will start with the volunteers, and then work our way through the other teams.

== How can you help? ==

1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+)
2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects.
3. Work on adding functional test jobs that run under Python 3.

== How can you ask for help? ==

If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list. 
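For reviewers who have not seen one of these migration patches yet, the in-tree settings they add are typically just a small .zuul.yaml carrying job templates. A sketch follows; the template names are illustrative and each project's actual list varies, and per the note in this update the shared queue settings stay in project-config.

```yaml
# Illustrative .zuul.yaml of the kind added by these migration patches:
# CI configuration only, no production code. Template names vary per
# project; note there is deliberately no "queue" setting here.
- project:
    templates:
      - openstack-python-jobs
      - openstack-python36-jobs
      - check-requirements
      - publish-openstack-docs-pti
```

Because a file like this only selects which CI jobs run, landing it directly on each stable branch carries none of the risk the stable backport policy is designed to guard against.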
== Reference Material == Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open Storyboard: https://storyboard.openstack.org/#!/board/104 Zuul migration notes: https://etherpad.openstack.org/p/python3-first Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 From frode.nordahl at canonical.com Wed Sep 12 15:24:06 2018 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Wed, 12 Sep 2018 09:24:06 -0600 Subject: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team In-Reply-To: References: <5157f326-5422-6a76-efcd-a80439e5d778@gmail.com> <397c2a43-42b5-dd92-81cf-120532aa27a4@canonical.com> Message-ID: Core membership application approved, welcome aboard Felipe! FTR, I have also gathered an off-list +1 from David Ames On Wed, Sep 12, 2018 at 4:04 AM Alex Kavanagh wrote: > +1 from me too. > > On Wed, Sep 12, 2018 at 7:29 AM, Liam Young > wrote: > >> +1 and thanks for all your contributions Felipe! >> >> On Wed, Sep 12, 2018 at 6:51 AM Chris MacNaughton < >> chris.macnaughton at canonical.com> wrote: >> >>> +1 Felipe has been a solid contributor to the Openstack Charms for some >>> time now. >>> >>> Chris >>> >>> On 11-09-18 23:07, Ryan Beisner wrote: >>> >>> +1 I'm always happy to see Felipe's contributions and fixes come >>> through. >>> >>> Cheers! >>> >>> Ryan >>> >>> >>> >>> >>> On Tue, Sep 11, 2018 at 1:10 PM James Page >>> wrote: >>> >>>> +1 >>>> >>>> On Wed, 5 Sep 2018 at 15:48 Billy Olsen wrote: >>>> >>>>> Hi, >>>>> >>>>> I'd like to propose Felipe Reyes to join the OpenStack Charmers team as >>>>> a core member. Over the past couple of years Felipe has contributed >>>>> numerous patches and reviews to the OpenStack charms [0]. 
His >>>>> experience >>>>> and knowledge of the charms used in OpenStack and the usage of Juju >>>>> make >>>>> him a great candidate. >>>>> >>>>> [0] - >>>>> >>>>> https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22 >>>>> >>>>> Thanks, >>>>> >>>>> Billy Olsen >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Alex Kavanagh - Software Engineer > Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd > 
__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Frode Nordahl Software Engineer Canonical Ltd. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Sep 12 15:47:27 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 09:47:27 -0600 Subject: [openstack-dev] Open letter/request to TC candidates (and existing elected officials) Message-ID: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Rather than take a tangent on Kristi's candidacy thread [1], I'll bring this up separately. Kristi said: "Ultimately, this list isn’t exclusive and I’d love to hear your and other people's opinions about what you think the TC should focus on." Well since you asked... Some feedback I gave to the public cloud work group yesterday was to get their RFE/bug list ranked from the operator community (because some of the requests are not exclusive to public cloud), and then put pressure on the TC to help project manage the delivery of the top issue. I would like all of the SIGs to do this. The upgrades SIG should rank and socialize their #1 issue that needs attention from the developer community - maybe that's better upgrade CI testing for deployment projects, maybe it's getting the pre-upgrade checks goal done for Stein. The UC should also be doing this; maybe that's the UC saying, "we need help on closing feature gaps in openstack client and/or the SDK". I don't want SIGs to bombard the developers with *all* of their requirements, but I want to get past *talking* about the *same* issues *every* time we get together. I want each group to say, "this is our top issue and we want developers to focus on it."
For example, the extended maintenance resolution [2] was purely birthed from frustration about talking about LTS and stable branch EOL every time we get together. It's also the responsibility of the operator and user communities to weigh in on proposed release goals, but the TC should be actively trying to get feedback from those communities about proposed goals, because I bet operators and users don't care about mox removal [3]. I want to see the TC be more of a cross-project project management group, like a group of Ildikos and what she did between nova and cinder to get volume multi-attach done, which took persistent supervision to herd the cats and get it delivered. Lance is already trying to do this with unified limits. Doug is doing this with the python3 goal. I want my elected TC members to be pushing tangible technical deliverables forward. I don't find any value in the TC debating ad nauseam about visions and constellations and "what is openstack?". Scope will change over time depending on who is contributing to openstack, we should just accept this. And we need to realize that if we are failing to deliver value to operators and users, they aren't going to use openstack and then "what is openstack?" won't matter because no one will care. So I encourage all elected TC members to work directly with the various SIGs to figure out their top issue and then work on managing those deliverables across the community because the TC is particularly well suited to do so given the elected position. I realize political and bureaucratic "how should openstack deal with x?" things will come up, but those should not be the priority of the TC. So instead of philosophizing about things like, "should all compute agents be in a single service with a REST API" for hours and hours, every few months - immediately ask, "would doing that get us any closer to achieving top technical priority x?" 
Because if not, or it's so fuzzy in scope that no one sees the way forward, document a decision and then drop it. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html [2] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html [3] https://governance.openstack.org/tc/goals/rocky/mox_removal.html -- Thanks, Matt From zhipengh512 at gmail.com Wed Sep 12 15:59:24 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 12 Sep 2018 09:59:24 -0600 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: Well Public Cloud WG has prepared the ammo as you know and to discuss with TC on Friday :) A hundred percent with you on this matter. On Wed, Sep 12, 2018 at 9:47 AM Matt Riedemann wrote: > Rather than take a tangent on Kristi's candidacy thread [1], I'll bring > this up separately. > > Kristi said: > > "Ultimately, this list isn’t exclusive and I’d love to hear your and > other people's opinions about what you think the I should focus on." > > Well since you asked... > > Some feedback I gave to the public cloud work group yesterday was to get > their RFE/bug list ranked from the operator community (because some of > the requests are not exclusive to public cloud), and then put pressure > on the TC to help project manage the delivery of the top issue. I would > like all of the SIGs to do this. The upgrades SIG should rank and > socialize their #1 issue that needs attention from the developer > community - maybe that's better upgrade CI testing for deployment > projects, maybe it's getting the pre-upgrade checks goal done for Stein. > The UC should also be doing this; maybe that's the UC saying, "we need > help on closing feature gaps in openstack client and/or the SDK". 
I > don't want SIGs to bombard the developers with *all* of their > requirements, but I want to get past *talking* about the *same* issues > *every* time we get together. I want each group to say, "this is our top > issue and we want developers to focus on it." For example, the extended > maintenance resolution [2] was purely birthed from frustration about > talking about LTS and stable branch EOL every time we get together. It's > also the responsibility of the operator and user communities to weigh in > on proposed release goals, but the TC should be actively trying to get > feedback from those communities about proposed goals, because I bet > operators and users don't care about mox removal [3]. > > I want to see the TC be more of a cross-project project management > group, like a group of Ildikos and what she did between nova and cinder > to get volume multi-attach done, which took persistent supervision to > herd the cats and get it delivered. Lance is already trying to do this > with unified limits. Doug is doing this with the python3 goal. I want my > elected TC members to be pushing tangible technical deliverables forward. > > I don't find any value in the TC debating ad nauseam about visions and > constellations and "what is openstack?". Scope will change over time > depending on who is contributing to openstack, we should just accept > this. And we need to realize that if we are failing to deliver value to > operators and users, they aren't going to use openstack and then "what > is openstack?" won't matter because no one will care. > > So I encourage all elected TC members to work directly with the various > SIGs to figure out their top issue and then work on managing those > deliverables across the community because the TC is particularly well > suited to do so given the elected position. I realize political and > bureaucratic "how should openstack deal with x?" things will come up, > but those should not be the priority of the TC. 
So instead of > philosophizing about things like, "should all compute agents be in a > single service with a REST API" for hours and hours, every few months - > immediately ask, "would doing that get us any closer to achieving top > technical priority x?" Because if not, or it's so fuzzy in scope that no > one sees the way forward, document a decision and then drop it. > > [1] > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html > [2] > > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > [3] https://governance.openstack.org/tc/goals/rocky/mox_removal.html > > -- > > Thanks, > > Matt > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Sep 12 16:52:42 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 12 Sep 2018 10:52:42 -0600 Subject: [openstack-dev] [os-upstream-institute] Team lunch at the PTG next week - ACTION NEEDED In-Reply-To: <165c034d94f.df9c64a0125121.1658777268767680518@ghanshyammann.com> References: <71A76D76-D780-41A1-9CB4-C63757F4B90E@gmail.com> <165c034d94f.df9c64a0125121.1658777268767680518@ghanshyammann.com> Message-ID: <5C30B77C-2140-40BD-9A1F-A8995A1282E1@gmail.com> Hi, Wednesday is already here so let’s meet for lunch today! 
We will have a table reserved for us in the lunch room (Ballroom D), let’s meet there so we can catch up a little before the Berlin Summit. :) Thanks, Ildikó > On 2018. Sep 9., at 15:20, Ghanshyam Mann wrote: > > I am in for Wed lunch meeting. > > -gmann > > ---- On Sat, 08 Sep 2018 07:30:53 +0900 Ildiko Vancsa wrote ---- >> Hi Training Team, >> >> As a couple of us will be at the PTG next week it would be great to get together one of the days maybe for lunch. >> >> Wednesday would work the best for Kendall and me, but we can look into other days as well if it would not work for the majority of people around. >> >> So my questions would be: >> >> * Are you interested in getting together one of the lunch slots during next week? >> >> * Would Wednesday work for you or do you have another preference? >> >> Please drop a response to this thread and we will figure it out by Monday or early next week based on the responses. >> >> Thanks, >> Ildikó >> (IRC: ildikov) >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jim at jimrollenhagen.com Wed Sep 12 17:23:59 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 12 Sep 2018 11:23:59 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? Message-ID: The process of operators upgrading Python versions across their fleet came up this morning. It's fairly obvious that operators will want to do this in a rolling fashion. Has anyone considered doing this in CI? 
For example, running multinode grenade with python 2 on one node and python 3 on the other node. Should we (openstack) test this situation, or even care? // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Sep 12 17:44:55 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 12 Sep 2018 10:44:55 -0700 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: References: Message-ID: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> On Wed, Sep 12, 2018, at 10:23 AM, Jim Rollenhagen wrote: > The process of operators upgrading Python versions across their fleet came > up this morning. It's fairly obvious that operators will want to do this in > a rolling fashion. > > Has anyone considered doing this in CI? For example, running multinode > grenade with python 2 on one node and python 3 on the other node. > > Should we (openstack) test this situation, or even care? > This came up in a Vancouver summit session (the python3 one I think). General consensus there seemed to be that we should have grenade jobs that run python2 on the old side and python3 on the new side and test the update from one to another through a release that way. Additionally there was thought that the nova partial job (and similar grenade jobs) could hold the non upgraded node on python2 and that would talk to a python3 control plane. I haven't seen or heard of anyone working on this yet though. Clark From johnsomor at gmail.com Wed Sep 12 17:58:13 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 12 Sep 2018 11:58:13 -0600 Subject: [openstack-dev] [openstack][infra]Including Functional Tests in Coverage In-Reply-To: References: Message-ID: We do this in Octavia. The openstack-tox-cover calls the cover environment in tox.ini, so you can add it there. Check out the tox.ini in the openstack/octavia project. 
Michael On Wed, Sep 12, 2018 at 7:01 AM reedip banerjee wrote: > > Hi All, > > Has anyone ever experimented with including Functional Tests in the openstack-tox-cover job? > We would like to include Functional tests in the coverage jobs and already have a dsvm-functional job in place. However, openstack-tox-cover is, if I am correct, an infra job which is directly called. > > Has anyone tried to include the functional tests and all their pre-requisites in the cover job? If so, any pointers would be great > > > -- > Thanks and Regards, > Reedip Banerjee > IRC: reedipb > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Wed Sep 12 18:04:02 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 12 Sep 2018 12:04:02 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> Message-ID: <1536775296-sup-6148@lrrr.local> Excerpts from Clark Boylan's message of 2018-09-12 10:44:55 -0700: > On Wed, Sep 12, 2018, at 10:23 AM, Jim Rollenhagen wrote: > > The process of operators upgrading Python versions across their fleet came > > up this morning. It's fairly obvious that operators will want to do this in > > a rolling fashion. > > > > Has anyone considered doing this in CI? For example, running multinode > > grenade with python 2 on one node and python 3 on the other node. > > > > Should we (openstack) test this situation, or even care? > > > > This came up in a Vancouver summit session (the python3 one I think).
General consensus there seemed to be that we should have grenade jobs that run python2 on the old side and python3 on the new side and test the update from one to another through a release that way. Additionally there was thought that the nova partial job (and similar grenade jobs) could hold the non upgraded node on python2 and that would talk to a python3 control plane. > > I haven't seen or heard of anyone working on this yet though. > > Clark > IIRC, we also talked about not supporting multiple versions of python on a given node, so all of the services on a node would need to be upgraded together. Doug From thierry at openstack.org Wed Sep 12 18:25:47 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 12 Sep 2018 20:25:47 +0200 Subject: [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> Matt Riedemann wrote: > [...] > I want to see the TC be more of a cross-project project management > group, like a group of Ildikos and what she did between nova and cinder > to get volume multi-attach done, which took persistent supervision to > herd the cats and get it delivered. Lance is already trying to do this > with unified limits. Doug is doing this with the python3 goal. I want my > elected TC members to be pushing tangible technical deliverables forward. > > I don't find any value in the TC debating ad nauseam about visions and > constellations and "what is openstack?". Scope will change over time > depending on who is contributing to openstack, we should just accept > this. And we need to realize that if we are failing to deliver value to > operators and users, they aren't going to use openstack and then "what > is openstack?" won't matter because no one will care. > [...] 
I agree that we generally need more of those cross-project champions, and TC members are generally in a good position to do that kind of work. The TC itself is also in a good position to "bless" those initiatives and give them some amount of priority (with our limited influence). I'm just a bit worried about limiting that role to the elected TC members. If we say "it's the role of the TC to do cross-project PM in OpenStack" then we artificially limit the number of people who would sign up to do that kind of work. You mention Ildiko and Lance: they did that line of work without being elected. So I would definitely support having champions to drive SIG cross-project priorities, and use the TC both to support them and as a natural pool of champion candidates -- I would just avoid tying the role to the elected group? -- Thierry Carrez (ttx) From lbragstad at gmail.com Wed Sep 12 18:41:20 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 12 Sep 2018 12:41:20 -0600 Subject: [openstack-dev] [all] Consistent policy names Message-ID: The topic of having consistent policy names has popped up a few times this week. Ultimately, if we are to move forward with this, we'll need a convention. To help with that a little bit I started an etherpad [0] that includes links to policy references, basic conventions *within* that service, and some examples of each. I got through quite a few projects this morning, but there are still a couple left. The idea is to look at what we do today and see what conventions we can come up with to move towards, which should also help us determine how much each convention is going to impact services (e.g. picking a convention that will cause 70% of services to rename policies). Please have a look and we can discuss conventions in this thread. If we come to agreement, I'll start working on some documentation in oslo.policy so that it's somewhat official before starting to rename policies.
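To make the discussion concrete, most of the conventions collected on the etherpad boil down to something like `service:resource:action`. A tiny validator for one such convention could look like the sketch below; the exact pattern is only an assumption for illustration, since agreeing on the real convention is precisely what this thread is for:

```python
import re

# Hypothetical convention: "service:resource:action", lowercase words
# that may contain underscores or hyphens. Only a sketch for discussion.
POLICY_NAME = re.compile(r"^[a-z][a-z_-]*:[a-z][a-z_-]*:[a-z][a-z_-]*$")

def follows_convention(name):
    """Return True if a policy name matches the sketched convention."""
    return POLICY_NAME.match(name) is not None

print(follows_convention("compute:servers:create"))  # True
print(follows_convention("identity:list_users"))     # False (two segments)
```

Something along these lines could eventually live next to the convention documentation in oslo.policy, so projects can check new policy names as they are registered.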
[0] https://etherpad.openstack.org/p/consistent-policy-names -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Wed Sep 12 18:52:35 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 12 Sep 2018 18:52:35 +0000 Subject: [openstack-dev] [all] Consistent policy names In-Reply-To: References: Message-ID: So +1 Tim From: Lance Bragstad Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 12 September 2018 at 20:43 To: "OpenStack Development Mailing List (not for usage questions)" , OpenStack Operators Subject: [openstack-dev] [all] Consistent policy names The topic of having consistent policy names has popped up a few times this week. Ultimately, if we are to move forward with this, we'll need a convention. To help with that a little bit I started an etherpad [0] that includes links to policy references, basic conventions *within* that service, and some examples of each. I got through quite a few projects this morning, but there are still a couple left. The idea is to look at what we do today and see what conventions we can come up with to move towards, which should also help us determine how much each convention is going to impact services (e.g. picking a convention that will cause 70% of services to rename policies). Please have a look and we can discuss conventions in this thread. If we come to agreement, I'll start working on some documentation in oslo.policy so that it's somewhat official before starting to rename policies. [0] https://etherpad.openstack.org/p/consistent-policy-names -------------- next part -------------- An HTML attachment was scrubbed...
URL: From e0ne at e0ne.info Wed Sep 12 19:39:58 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 12 Sep 2018 22:39:58 +0300 Subject: [openstack-dev] [horizon][plugins] Development environment for Horizon plugins Message-ID: Hi team, Some time ago we found an issue that there is no simple way to test a Horizon plugin locally with the current version of Horizon [1], [2]. It leads to the situation where plugin developers use the latest released Horizon version instead of the latest master during new feature development. This issue is not reproduced on CI because Zuul does a great job here. We discussed this issue at the PTG [2] (line #163) and decided to go forward with such a solution [3] for now instead of a custom solution in each plugin like [4]. If anybody has any other ideas or concerns, please let me know. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128310.html [2] https://etherpad.openstack.org/p/horizon-ptg-planning-denver-2018 [3] https://review.openstack.org/#/c/601836/ [4] https://github.com/openstack/magnum-ui/blob/master/tox.ini#L23 Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Sep 12 19:51:16 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 12 Sep 2018 13:51:16 -0600 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-31 update In-Reply-To: References: <84f4ab63-5790-1ba8-7be2-eadc98f3b3ae@gmail.com> Message-ID: On 09/04/2018 06:49 PM, Matt Riedemann wrote: > On 9/4/2018 6:39 PM, Ben Nemec wrote: >> Would it be helpful to factor some of the common code out into an Oslo >> library so projects basically just have to subclass, implement check >> functions, and add them to the _upgrade_checks dict? It's not a huge >> amount of code, but a bunch of it seems like it would need to be >> copy-pasted into every project.
I have a tentative topic on the Oslo >> PTG schedule for this but figured I should check if it's something we >> even want to pursue. > > Yeah I'm not opposed to trying to pull the nova stuff into a common > library for easier consumption in other projects, I just haven't devoted > the time for it, nor will I probably have the time to do it. If others > are willing to investigate that it would be great. > Okay, here's a first shot at such a library: https://github.com/cybertron/oslo.upgradecheck Lots of rough edges that would need to be addressed before it could be released, but it demonstrates the basic idea I had in mind for this. The upgradecheck module contains the common code, and __main__.py is a demo use of the code. -Ben From doug at doughellmann.com Wed Sep 12 19:57:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 12 Sep 2018 13:57:23 -0600 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-31 update In-Reply-To: References: <84f4ab63-5790-1ba8-7be2-eadc98f3b3ae@gmail.com> Message-ID: <1536782190-sup-6197@lrrr.local> Excerpts from Ben Nemec's message of 2018-09-12 13:51:16 -0600: > > On 09/04/2018 06:49 PM, Matt Riedemann wrote: > > On 9/4/2018 6:39 PM, Ben Nemec wrote: > >> Would it be helpful to factor some of the common code out into an Oslo > >> library so projects basically just have to subclass, implement check > >> functions, and add them to the _upgrade_checks dict? It's not a huge > >> amount of code, but a bunch of it seems like it would need to be > >> copy-pasted into every project. I have a tentative topic on the Oslo > >> PTG schedule for this but figured I should check if it's something we > >> even want to pursue. > > > > Yeah I'm not opposed to trying to pull the nova stuff into a common > > library for easier consumption in other projects, I just haven't devoted > > the time for it, nor will I probably have the time to do it. If others > > are willing to investigate that it would be great. 
> > Okay, here's a first shot at such a library: > https://github.com/cybertron/oslo.upgradecheck > > Lots of rough edges that would need to be addressed before it could be > released, but it demonstrates the basic idea I had in mind for this. The > upgradecheck module contains the common code, and __main__.py is a demo > use of the code. > > -Ben > Nice! Let's get that imported into gerrit and keep iterating on it to make it usable for the goal. Maybe we can get one of the projects most interested in working on this goal early to help with testing and UX feedback. Doug From zbitter at redhat.com Wed Sep 12 20:25:41 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 12 Sep 2018 14:25:41 -0600 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-31 update In-Reply-To: References: <84f4ab63-5790-1ba8-7be2-eadc98f3b3ae@gmail.com> Message-ID: <1259af5c-ad4c-b5a1-7728-a53bec8f8982@redhat.com> On 4/09/18 5:39 PM, Ben Nemec wrote: > Would it be helpful to factor some of the common code out into an Oslo > library so projects basically just have to subclass, implement check > functions, and add them to the _upgrade_checks dict? It's not a huge > amount of code, but a bunch of it seems like it would need to be > copy-pasted into every project. I have a tentative topic on the Oslo PTG > schedule for this but figured I should check if it's something we even > want to pursue. +1. We started discussing this today and immediately realised it was going to result in every project copy/pasting the code to create a -status executable and the upgrade-check command itself. It would be great if we can avoid this from the start. > On 09/04/2018 04:29 PM, Matt Riedemann wrote: >> Just a few updates this week. >> >> 1. The story is now populated with a task per project that may have >> something to complete for this goal [1]. PTLs, or their liaison(s), >> should assign the task for their project to whomever is going to work >> on the goal.
The goal document in governance is being updated with the >> appropriate links to storyboard [2]. >> >> 2. While populating the story and determining which projects to omit >> (like infra, docs, QA were obvious), I left in the deployment projects >> but those likely can/should opt-out of this goal for Stein since the >> goal is more focused on service projects like keystone/cinder/glance. >> I have pushed a docs update to the goal with respect to deployment >> projects [3]. For deployment projects that don't plan on doing >> anything with this goal, feel free to just invalidate the task in >> storyboard for your project. >> >> 3. I have a developer/contributor reference docs patch up for review >> in nova [4] which is hopefully written generically enough that it can >> be consumed by and used as a guide for other projects implementing >> these upgrade checks. >> >> 4. I've proposed an amendment to the completion criteria for the goal >> [5] saying that projects with the "supports-upgrade" tag should >> integrate the checks from their project with their upgrade CI testing >> job. That could be grenade or some other upgrade testing framework, >> but it stands to reason that a project which claims to support >> upgrades and has automated checks for upgrades, should be running >> those in their CI. >> >> Let me know if there are any questions. There will also be some time >> during a PTG lunch-and-learn session where I'll go over this goal at a >> high level, so feel free to ask questions during or after that at the >> PTG as well.
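The subclass-and-register pattern being discussed in this thread (check methods collected in an `_upgrade_checks` mapping on a common base class) can be sketched roughly as follows. This is a hypothetical simplification for illustration only, not the actual nova or oslo.upgradecheck code:

```python
import enum


class Code(enum.IntEnum):
    """Return codes for upgrade checks; the worst result wins."""
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2


class UpgradeCommands(object):
    """Common base class; a project subclasses this and registers checks."""

    # Tuples of (check name, check function), filled in by subclasses.
    _upgrade_checks = ()

    def check(self):
        """Run every registered check and return the worst result code."""
        worst = Code.SUCCESS
        for name, func in self._upgrade_checks:
            result = func(self)
            print("%-20s %s" % (name, result.name))
            worst = max(worst, result)
        return worst


class MyProjectCommands(UpgradeCommands):
    """What an individual project's subclass might look like."""

    def _check_placement(self):
        # A real check would inspect configuration or the database here.
        return Code.SUCCESS

    _upgrade_checks = (("placement API", _check_placement),)


# A "myproject-status upgrade check" console entry point would then exit
# with the integer value returned by MyProjectCommands().check().
```

With the common base class in a library, each project only carries its own `_check_*` methods and the registration tuple, which is exactly the duplication Ben and Zane want to avoid.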
>> >> [1] https://storyboard.openstack.org/#!/story/2003657 >> [2] https://review.openstack.org/#/c/599759/ >> [3] https://review.openstack.org/#/c/599835/ >> [4] https://review.openstack.org/#/c/596902/ >> [5] https://review.openstack.org/#/c/599849/ >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Wed Sep 12 20:28:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 12 Sep 2018 14:28:29 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: <1536775296-sup-6148@lrrr.local> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> Message-ID: <1536783914-sup-2738@lrrr.local> Excerpts from Doug Hellmann's message of 2018-09-12 12:04:02 -0600: > Excerpts from Clark Boylan's message of 2018-09-12 10:44:55 -0700: > > On Wed, Sep 12, 2018, at 10:23 AM, Jim Rollenhagen wrote: > > > The process of operators upgrading Python versions across their fleet came > > > up this morning. It's fairly obvious that operators will want to do this in > > > a rolling fashion. > > > > > > Has anyone considered doing this in CI? For example, running multinode > > > grenade with python 2 on one node and python 3 on the other node. > > > > > > Should we (openstack) test this situation, or even care? > > > > > > > This came up in a Vancouver summit session (the python3 one I think). General consensus there seemed to be that we should have grenade jobs that run python2 on the old side and python3 on the new side and test the update from one to another through a release that way. 
Additionally there was thought that the nova partial job (and similar grenade jobs) could hold the non upgraded node on python2 and that would talk to a python3 control plane. > > > > I haven't seen or heard of anyone working on this yet though. > > > > Clark > > > > IIRC, we also talked about not supporting multiple versions of > python on a given node, so all of the services on a node would need > to be upgraded together. > > Doug I spent a little time talking with the QA team about setting up this job, and Attila pointed out that we should think about what exactly we think would break during a 2-to-3 in-place upgrade like this. Keeping in mind that we are still testing initial installation under both versions and upgrades under python 2, do we have any specific concerns about the python *version* causing upgrade issues? Doug From emccormick at cirrusseven.com Wed Sep 12 21:07:19 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 12 Sep 2018 15:07:19 -0600 Subject: [openstack-dev] Ops Forum Session Brainstorming Message-ID: Hello everyone, I have set up an etherpad to collect Ops related session ideas for the Forum at the Berlin Summit. Please suggest any topics that you would like to see covered, and +1 existing topics you like. https://etherpad.openstack.org/p/ops-forum-stein Cheers, Erik From dms at danplanet.com Wed Sep 12 21:30:12 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 12 Sep 2018 14:30:12 -0700 Subject: [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> (Thierry Carrez's message of "Wed, 12 Sep 2018 20:25:47 +0200") References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> Message-ID: > I'm just a bit worried to limit that role to the elected TC members. 
If > we say "it's the role of the TC to do cross-project PM in OpenStack" > then we artificially limit the number of people who would sign up to do > that kind of work. You mention Ildiko and Lance: they did that line of > work without being elected. Why would saying that we _expect_ the TC members to do that work limit such activities only to those that are on the TC? I would expect the TC to take on the less-fun or often-neglected efforts that we all know are needed but don't have an obvious champion or sponsor. I think we expect some amount of widely-focused technical or project leadership from TC members, and certainly that expectation doesn't prevent others from leading efforts (even in the areas of proposing TC resolutions, etc) right? --Dan From davanum at gmail.com Wed Sep 12 21:41:45 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Wed, 12 Sep 2018 15:41:45 -0600 Subject: [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> Message-ID: On Wed, Sep 12, 2018 at 3:30 PM Dan Smith wrote: > > I'm just a bit worried to limit that role to the elected TC members. If > > we say "it's the role of the TC to do cross-project PM in OpenStack" > > then we artificially limit the number of people who would sign up to do > > that kind of work. You mention Ildiko and Lance: they did that line of > > work without being elected. > > Why would saying that we _expect_ the TC members to do that work limit > such activities only to those that are on the TC? I would expect the TC > to take on the less-fun or often-neglected efforts that we all know are > needed but don't have an obvious champion or sponsor. 
> > I think we expect some amount of widely-focused technical or project > leadership from TC members, and certainly that expectation doesn't > prevent others from leading efforts (even in the areas of proposing TC > resolutions, etc) right? > +1 Dan! > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Sep 12 21:55:28 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 12 Sep 2018 21:55:28 +0000 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> On 2018-09-12 09:47:27 -0600 (-0600), Matt Riedemann wrote: [...] > So I encourage all elected TC members to work directly with the > various SIGs to figure out their top issue and then work on > managing those deliverables across the community because the TC is > particularly well suited to do so given the elected position. [...] I almost agree with you. I think the OpenStack TC members should be actively engaged in recruiting and enabling interested people in the community to do those things, but I don't think such work should be solely the domain of the TC and would hate to give the impression that you must be on the TC to have such an impact. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zhipengh512 at gmail.com Wed Sep 12 22:03:12 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 12 Sep 2018 16:03:12 -0600 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> Message-ID: On Wed, Sep 12, 2018 at 3:55 PM Jeremy Stanley wrote: > On 2018-09-12 09:47:27 -0600 (-0600), Matt Riedemann wrote: > [...] > > So I encourage all elected TC members to work directly with the > > various SIGs to figure out their top issue and then work on > > managing those deliverables across the community because the TC is > > particularly well suited to do so given the elected position. > [...] > > I almost agree with you. I think the OpenStack TC members should be > actively engaged in recruiting and enabling interested people in the > community to do those things, but I don't think such work should be > solely the domain of the TC and would hate to give the impression > that you must be on the TC to have such an impact. > -- > Jeremy Stanley > Jeremy, this is not to say that one must be on the TC to have such an impact, it is that TC has the duty more than anyone else to get this specific cross-project goal done. I would even argue it is not the job description of TC to enable/recruit, but to just do it. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. 
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Sep 12 22:09:49 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 12 Sep 2018 16:09:49 -0600 Subject: [openstack-dev] [doc][i18n][infra][tc] plan for PDF and translation builds for documentation Message-ID: <1536789967-sup-82@lrrr.local> Ian has been working for a while on enabling PDF and translation support for our documentation build job [1][2]. After exploring a few different approaches, today at the PTG I think we were able to agree on a plan that will let us move ahead. The tl;dr version is that we want to add some extra steps to the existing openstack-tox-docs job (or make a new job that includes those steps and change the PTI project template so projects start using it transparently). The changes to the job will be made so that if the PDF and translation builds work the results will be published and if they fail that will not trigger a job failure. The longer version is that we want to continue to use the existing tox environment in each project as the basis for the job, since that allows teams to control the version of python used, the dependencies installed, and add custom steps to their build (such as for pre-processing the documentation). So, the new or updated job will start by running "tox -e docs" as it does today. Then it will run Sphinx again with the instructions to build PDF output, and copy the results into the directory that the publish job will use to sync to the web server. And then it will run the scripts to build translated versions of the documentation as HTML, and copy the results into place for publishing. 
There are a few settings that can/should be configured via the Sphinx conf.py file, but rather than trying to push updates into all of the project repos we will look into passing the options on the command line or making the openstackdocstheme inject those settings. This would apply to the setting to control the latex processor as well as fonts or other settings that control the output. Those things all relate to the format of the output, so it seems appropriate to have the theme control them. To cut down on any delays caused by introducing several consecutive sphinx-build runs to the documentation job we plan to have the check and gate jobs only run the translation portion of the build if the message catalog files for a language are modified. Since this work will all happen inside the documentation build job, and it will be enabled for all teams automatically, we do not need to update the Project Testing Interface, and Ian can abandon his governance changes. Monty is going to work on the implementation with Ian using openstacksdk as a test bed [3]. As usual, please let me know if I've left out or mistaken any details. Doug [1] https://review.openstack.org/572559 [2] https://review.openstack.org/588110 [3] https://review.openstack.org/601659 From fungi at yuggoth.org Wed Sep 12 22:14:17 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 12 Sep 2018 22:14:17 +0000 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> Message-ID: <20180912221417.hxmsq6smyaxvvyqo@yuggoth.org> On 2018-09-12 16:03:12 -0600 (-0600), Zhipeng Huang wrote: > On Wed, Sep 12, 2018 at 3:55 PM Jeremy Stanley wrote: > > On 2018-09-12 09:47:27 -0600 (-0600), Matt Riedemann wrote: > > [...] 
> > > So I encourage all elected TC members to work directly with the > > > various SIGs to figure out their top issue and then work on > > > managing those deliverables across the community because the TC is > > > particularly well suited to do so given the elected position. > > [...] > > > > I almost agree with you. I think the OpenStack TC members should be > > actively engaged in recruiting and enabling interested people in the > > community to do those things, but I don't think such work should be > > solely the domain of the TC and would hate to give the impression > > that you must be on the TC to have such an impact. > > Jeremy, this is not to say that one must be on the TC to have such an > impact, it is that TC has the duty more than anyone else to get this > specific cross-project goal done. I would even argue it is not the job > description of TC to enable/recruit, but to just do it. I think Doug's work leading the Python 3 First effort is a great example. He has helped find and enable several other goal champions to collaborate on this. I appreciate the variety of other things Doug already does with his available time and would rather he not stop doing those things to spend all his time acting as a project manager. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From chris.friesen at windriver.com Wed Sep 12 22:52:16 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 12 Sep 2018 16:52:16 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: <1536775296-sup-6148@lrrr.local> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> Message-ID: <42116948-1c6d-5501-2df0-42f042d48365@windriver.com> On 9/12/2018 12:04 PM, Doug Hellmann wrote: >> This came up in a Vancouver summit session (the python3 one I think). 
General consensus there seemed to be that we should have grenade jobs that run python2 on the old side and python3 on the new side and test the update from one to another through a release that way. Additionally there was thought that the nova partial job (and similar grenade jobs) could hold the non upgraded node on python2 and that would talk to a python3 control plane. >> >> I haven't seen or heard of anyone working on this yet though. >> >> Clark >> > > IIRC, we also talked about not supporting multiple versions of > python on a given node, so all of the services on a node would need > to be upgraded together. As I understand it, the various services talk to each other using over-the-wire protocols. Assuming this is correct, why would we need to ensure they are using the same python version? Chris From mriedemos at gmail.com Wed Sep 12 23:00:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 17:00:04 -0600 Subject: [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> Message-ID: On 9/12/2018 3:30 PM, Dan Smith wrote: >> I'm just a bit worried to limit that role to the elected TC members. If >> we say "it's the role of the TC to do cross-project PM in OpenStack" >> then we artificially limit the number of people who would sign up to do >> that kind of work. You mention Ildiko and Lance: they did that line of >> work without being elected. > Why would saying that we _expect_ the TC members to do that work limit > such activities only to those that are on the TC? I would expect the TC > to take on the less-fun or often-neglected efforts that we all know are > needed but don't have an obvious champion or sponsor. 
> > I think we expect some amount of widely-focused technical or project > leadership from TC members, and certainly that expectation doesn't > prevent others from leading efforts (even in the areas of proposing TC > resolutions, etc) right? Absolutely. I'm not saying the cross-project project management should be restricted to or solely the responsibility of the TC. It's obvious there are people outside of the TC that have already been doing this - and no it's not always elected PTLs either. What I want is elected TC members to prioritize driving technical deliverables to completion based on ranked input from operators/users/SIGs over philosophical debates and politics/bureaucracy and help to complete the technical tasks if possible. -- Thanks, Matt From mriedemos at gmail.com Wed Sep 12 23:01:42 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 17:01:42 -0600 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> Message-ID: <970b673d-be91-f763-86a1-31f5e9ce52a3@gmail.com> On 9/12/2018 3:55 PM, Jeremy Stanley wrote: > I almost agree with you. I think the OpenStack TC members should be > actively engaged in recruiting and enabling interested people in the > community to do those things, but I don't think such work should be > solely the domain of the TC and would hate to give the impression > that you must be on the TC to have such an impact. See my reply to Thierry. This isn't what I'm saying. But I expect the elected TC members to be *much* more *directly* involved in managing and driving hard cross-project technical deliverables. 
-- Thanks, Matt From mriedemos at gmail.com Wed Sep 12 23:03:10 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 17:03:10 -0600 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180912221417.hxmsq6smyaxvvyqo@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912221417.hxmsq6smyaxvvyqo@yuggoth.org> Message-ID: On 9/12/2018 4:14 PM, Jeremy Stanley wrote: > I think Doug's work leading the Python 3 First effort is a great > example. He has helped find and enable several other goal champions > to collaborate on this. I appreciate the variety of other things > Doug already does with his available time and would rather he not > stop doing those things to spend all his time acting as a project > manager. I specifically called out what Doug is doing as an example of things I want to see the TC doing. I want more/all TC members doing that. -- Thanks, Matt From lbragstad at gmail.com Wed Sep 12 23:05:17 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 12 Sep 2018 17:05:17 -0600 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> Message-ID: On Wed, Sep 12, 2018 at 3:55 PM Jeremy Stanley wrote: > On 2018-09-12 09:47:27 -0600 (-0600), Matt Riedemann wrote: > [...] > > So I encourage all elected TC members to work directly with the > > various SIGs to figure out their top issue and then work on > > managing those deliverables across the community because the TC is > > particularly well suited to do so given the elected position. > [...] > > I almost agree with you. 
I think the OpenStack TC members should be > actively engaged in recruiting and enabling interested people in the > community to do those things, but I don't think such work should be > solely the domain of the TC and would hate to give the impression > that you must be on the TC to have such an impact. > I agree that relaying that type of impression would be negative, but I'm not sure this specifically would do that. I think we've been good about letting people step up to drive initiatives without being in an elected position [0]. IMHO, I think the point Matt is making here is more about ensuring we have people to do what we've agreed upon, as a community, as being mission critical. Enablement is imperative, but no matter how good we are at it, sometimes we really just need hands to do the work. [0] Of the six goals agreed upon since we've implemented champions in Queens, five of them have been championed by non-TC members (Chandan championed two, in back-to-back releases). > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Wed Sep 12 23:06:54 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 12 Sep 2018 23:06:54 +0000 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912221417.hxmsq6smyaxvvyqo@yuggoth.org> Message-ID: <20180912230654.5ldabmmtxlusrxep@yuggoth.org> On 2018-09-12 17:03:10 -0600 (-0600), Matt Riedemann wrote: > On 9/12/2018 4:14 PM, Jeremy Stanley wrote: > > I think Doug's work leading the Python 3 First effort is a great > > example. He has helped find and enable several other goal champions > > to collaborate on this. I appreciate the variety of other things > > Doug already does with his available time and would rather he not > > stop doing those things to spend all his time acting as a project > > manager. > > I specifically called out what Doug is doing as an example of > things I want to see the TC doing. I want more/all TC members > doing that. With that I was replying to Zhipeng Huang's message which you have trimmed above, specifically countering the assertion that recruiting others to help with these efforts is a waste of time and that TC members should simply do all the work themselves instead. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Sep 12 23:13:38 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 12 Sep 2018 23:13:38 +0000 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> Message-ID: <20180912231338.f2v5so7jelg3am7y@yuggoth.org> On 2018-09-12 17:05:17 -0600 (-0600), Lance Bragstad wrote: [...] > IMHO, I think the point Matt is making here is more about ensuring > sure we have people to do what we've agreed upon, as a community, > as being mission critical. Enablement is imperative, but no matter > how good we are at it, sometimes we really just needs hands to do > the work. [...] Sure, and I'm saying that instead I think the influence of TC members _can_ be more valuable in finding and helping additional people to do these things rather than doing it all themselves, and it's not just about the limited number of available hours in the day for one person versus many. The successes goal champions experience, the connections they make and the elevated reputation they gain throughout the community during the process of these efforts builds new leaders for us all. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mrhillsman at gmail.com Wed Sep 12 23:32:48 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 12 Sep 2018 18:32:48 -0500 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180912231338.f2v5so7jelg3am7y@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> Message-ID: We basically spent the day focusing on two things specific to what you bring up and are in agreement with you regarding action not just talk around feedback and outreach. [1] We wiped the agenda clean, discussed our availability (set reasonable expectations), and revisited how we can be more diligent and successful around these two principles which target your first comment, "...get their RFE/bug list ranked from the operator community (because some of the requests are not exclusive to public cloud), and then put pressure on the TC to help project manage the delivery of the top issue..." I will not get into much detail because again this response is specific to a portion of your email so in keeping with feedback and outreach the UC is making it a point to be intentional. We have already got action items [2] which target the concern you raise. We have agreed to hold each other accountable and adjusted our meeting structure to facilitate being successful. Not that the UC (elected members) are the only ones who can do this but we believe it is our responsibility to, regardless of what anyone else does. The UC is also expected to enlist others and hopefully through our efforts others are encouraged to participate and enlist others. 
[1] https://etherpad.openstack.org/p/uc-stein-ptg [2] https://etherpad.openstack.org/p/UC-Election-Qualifications On Wed, Sep 12, 2018 at 6:13 PM Jeremy Stanley wrote: > On 2018-09-12 17:05:17 -0600 (-0600), Lance Bragstad wrote: > [...] > > IMHO, I think the point Matt is making here is more about ensuring > > sure we have people to do what we've agreed upon, as a community, > > as being mission critical. Enablement is imperative, but no matter > > how good we are at it, sometimes we really just needs hands to do > > the work. > [...] > > Sure, and I'm saying that instead I think the influence of TC > members _can_ be more valuable in finding and helping additional > people to do these things rather than doing it all themselves, and > it's not just about the limited number of available hours in the day > for one person versus many. The successes goal champions experience, > the connections they make and the elevated reputation they gain > throughout the community during the process of these efforts builds > new leaders for us all. > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Wed Sep 12 23:50:30 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 17:50:30 -0600 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180912231338.f2v5so7jelg3am7y@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> Message-ID: <9ed16b6f-bc3a-4de3-bbbd-db62ac1ec32d@gmail.com> On 9/12/2018 5:13 PM, Jeremy Stanley wrote: > Sure, and I'm saying that instead I think the influence of TC > members_can_ be more valuable in finding and helping additional > people to do these things rather than doing it all themselves, and > it's not just about the limited number of available hours in the day > for one person versus many. The successes goal champions experience, > the connections they make and the elevated reputation they gain > throughout the community during the process of these efforts builds > new leaders for us all. Again, I'm not saying TC members should be doing all of the work themselves. That's not realistic, especially when critical parts of any major effort are going to involve developers from projects on which none of the TC members are active contributors (e.g. nova). I want to see TC members herd cats, for lack of a better analogy, and help out technically (with code) where possible. Given the repeated mention of how the "help wanted" list continues to not draw in contributors, I think the recruiting role of the TC should take a back seat to actually stepping in and helping work on those items directly. For example, Sean McGinnis is taking an active role in the operators guide and other related docs that continue to be discussed at every face to face event since those docs were dropped from openstack-manuals (in Pike). 
I think it's fair to say that the people generally elected to the TC are those most visible in the community (it's a popularity contest) and those people are generally the most visible because they have the luxury of working upstream the majority of their time. As such, it's their duty to oversee and spend time working on the hard cross-project technical deliverables that operators and users are asking for, rather than think of an infinite number of ways to try and draw *others* to help work on those gaps. As I think it's the role of a PTL within a given project to have a finger on the pulse of the technical priorities of that project and manage the developers involved (of which the PTL certainly may be one), it's the role of the TC to do the same across openstack as a whole. If a PTL doesn't have the time or willingness to do that within their project, they shouldn't be the PTL. The same goes for TC members IMO. -- Thanks, Matt From mriedemos at gmail.com Wed Sep 12 23:52:06 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 17:52:06 -0600 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> Message-ID: <6e3b031f-c450-22fd-391c-c71c8ad827cd@gmail.com> On 9/12/2018 5:32 PM, Melvin Hillsman wrote: > We basically spent the day focusing on two things specific to what you > bring up and are in agreement with you regarding action not just talk > around feedback and outreach. 
[1] > We wiped the agenda clean, discussed our availability (set reasonable > expectations), and revisited how we can be more diligent and successful > around these two principles which target your first comment, "...get > their RFE/bug list ranked from the operator community (because some of > the requests are not exclusive to public cloud), and then put pressure > on the TC to help project manage the delivery of the top issue..." > > I will not get into much detail because again this response is specific > to a portion of your email so in keeping with feedback and outreach the > UC is making it a point to be intentional. We have already got action > items [2] which target the concern you raise. We have agreed to hold > each other accountable and adjusted our meeting structure to facilitate > being successful. > > Not that the UC (elected members) are the only ones who can do this but > we believe it is our responsibility to; regardless of what anyone else > does. The UC is also expected to enlist others and hopefully through our > efforts others are encouraged participate and enlist others. > > [1] https://etherpad.openstack.org/p/uc-stein-ptg > [2] https://etherpad.openstack.org/p/UC-Election-Qualifications Awesome, thank you Melvin and others on the UC. -- Thanks, Matt From mrhillsman at gmail.com Thu Sep 13 02:08:10 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 12 Sep 2018 20:08:10 -0600 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <6e3b031f-c450-22fd-391c-c71c8ad827cd@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> <6e3b031f-c450-22fd-391c-c71c8ad827cd@gmail.com> Message-ID: You're welcome! 
-- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 On Wed, Sep 12, 2018, 5:52 PM Matt Riedemann wrote: > On 9/12/2018 5:32 PM, Melvin Hillsman wrote: > > We basically spent the day focusing on two things specific to what you > > bring up and are in agreement with you regarding action not just talk > > around feedback and outreach. [1] > > We wiped the agenda clean, discussed our availability (set reasonable > > expectations), and revisited how we can be more diligent and successful > > around these two principles which target your first comment, "...get > > their RFE/bug list ranked from the operator community (because some of > > the requests are not exclusive to public cloud), and then put pressure > > on the TC to help project manage the delivery of the top issue..." > > > > I will not get into much detail because again this response is specific > > to a portion of your email so in keeping with feedback and outreach the > > UC is making it a point to be intentional. We have already got action > > items [2] which target the concern you raise. We have agreed to hold > > each other accountable and adjusted our meeting structure to facilitate > > being successful. > > > > Not that the UC (elected members) are the only ones who can do this but > > we believe it is our responsibility to; regardless of what anyone else > > does. The UC is also expected to enlist others and hopefully through our > > efforts others are encouraged participate and enlist others. > > > > [1] https://etherpad.openstack.org/p/uc-stein-ptg > > [2] https://etherpad.openstack.org/p/UC-Election-Qualifications > > Awesome, thank you Melvin and others on the UC. > > -- > > Thanks, > > Matt > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at ericsson.com Thu Sep 13 02:47:46 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 12 Sep 2018 20:47:46 -0600 Subject: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG In-Reply-To: <1535704165.17206.0@smtp.office365.com> References: <1535619300.3600.5@smtp.office365.com> <4bb21c51-0092-70f3-a535-8fa59adae7ae@gmail.com> <1535704165.17206.0@smtp.office365.com> Message-ID: <1536806866.7148.0@smtp.office365.com> Hi, It seems that the Nova room (Ballroom A) does not have any projection possibilities. On the other hand, the Neutron room (Vail Meeting Room, Atrium Level) does have a projector, so I suggest moving the demo to the Neutron room. Cheers, gibi On Fri, Aug 31, 2018 at 2:29 AM, Balázs Gibizer wrote: > > > On Thu, Aug 30, 2018 at 8:13 PM, melanie witt > wrote: >> On Thu, 30 Aug 2018 12:43:06 -0500, Miguel Lavalle wrote: >>> Gibi, Bence, >>> >>> In fact, I added the demo explicitly to the Neutron PTG agenda from >>> 1:30 to 2, to give it visiblilty >> >> I'm interested in seeing the demo too. Will the demo be shown at the >> Neutron room or the Nova room? Historically, lunch has ended at >> 1:30, so this will be during the same time as the Neutron/Nova >> cross project time. Should we just co-locate together for the demo >> and the session? I expect anyone watching the demo will want to >> participate in the Neutron/Nova session as well. Either room is >> fine by me. >> > > I assume that the nova - neturon cross project session will be in the > nova room, so I propose to have the demo there as well to avoid > unnecessarily moving people around. For us it is totally OK to start > the demo at 1:30. > > Cheers, > gibi > > >> -melanie >> >>> On Thu, Aug 30, 2018 at 3:55 AM, Balázs Gibizer >>> >> > wrote: >>> >>> Hi, >>> >>> Based on the Nova PTG planning etherpad [1] there is a need to >>> talk >>> about the current state of the bandwidth work [2][3].
Bence >>> (rubasov) has already planned to show a small demo to Neutron >>> folks >>> about the current state of the implementation. So Bence and I >>> are >>> wondering about bringing that demo close to the nova - neutron >>> cross >>> project session. That session is currently planned to happen >>> Thursday after lunch. So we are think about showing the demo >>> right >>> before that session starts. It would start 30 minutes before the >>> nova - neutron cross project session. >>> >>> Are Nova folks also interested in seeing such a demo? >>> >>> If you are interested in seeing the demo please drop us a line >>> or >>> ping us in IRC so we know who should we wait for. >>> >>> Cheers, >>> gibi >>> >>> [1] https://etherpad.openstack.org/p/nova-ptg-stein >>> >>> [2] >>> >>> https://specs.openstack.org/openstack/neutron-specs/specs/rocky/minimum-bandwidth-allocation-placement-api.html >>> >>>  >>> [3] >>> >>> https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html >>> >>>  >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>>  >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>>  >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > 
> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Thu Sep 13 04:44:28 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 12 Sep 2018 22:44:28 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: <42116948-1c6d-5501-2df0-42f042d48365@windriver.com> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <42116948-1c6d-5501-2df0-42f042d48365@windriver.com> Message-ID: <1536813766-sup-1226@lrrr.local> Excerpts from Chris Friesen's message of 2018-09-12 16:52:16 -0600: > On 9/12/2018 12:04 PM, Doug Hellmann wrote: > > >> This came up in a Vancouver summit session (the python3 one I think). General consensus there seemed to be that we should have grenade jobs that run python2 on the old side and python3 on the new side and test the update from one to another through a release that way. Additionally there was thought that the nova partial job (and similar grenade jobs) could hold the non upgraded node on python2 and that would talk to a python3 control plane. > >> > >> I haven't seen or heard of anyone working on this yet though. > >> > >> Clark > >> > > > > IIRC, we also talked about not supporting multiple versions of > > python on a given node, so all of the services on a node would need > > to be upgraded together. > > As I understand it, the various services talk to each other using > over-the-wire protocols. Assuming this is correct, why would we need to > ensure they are using the same python version? > > Chris > It's more a matter of trying to describe what test scenarios we would run. 
In this case we were saying that we're not going to try to spend the effort upstream to test a mixed configuration on a single node. Someone downstream might choose to do that, but we didn't see value in it for the community to try to do it upstream. Doug From aj at suse.com Thu Sep 13 06:14:30 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 13 Sep 2018 08:14:30 +0200 Subject: [openstack-dev] [doc][i18n][infra][tc] plan for PDF and translation builds for documentation In-Reply-To: <1536789967-sup-82@lrrr.local> References: <1536789967-sup-82@lrrr.local> Message-ID: I like the plan, thanks! One suggestion below: The translated documents are built for releasenotes already in a similar way as proposed. Here we update the index.rst file on the fly to link to all translated versions, see the bottom of e.g. https://docs.openstack.org/releasenotes/openstack-manuals/ I suggest that, as part of the build, you also look at how to link to the PDFs and translated documents. For reference, the logic for releasenotes translation is here: http://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/build-releasenotes/tasks/main.yaml#n10 I would appreciate it if you handled releasenotes the same way as other documents, so if you want to change releasenotes in the end, please do. Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From michel at redhat.com Thu Sep 13 07:04:27 2018 From: michel at redhat.com (Michel Peterson) Date: Thu, 13 Sep 2018 10:04:27 +0300 Subject: [openstack-dev] [doc][i18n][infra][tc] plan for PDF and translation builds for documentation In-Reply-To: <1536789967-sup-82@lrrr.local> References: <1536789967-sup-82@lrrr.local> Message-ID: On Thu, Sep 13, 2018 at 1:09 AM, Doug Hellmann wrote: > The longer version is that we want to continue to use the existing > tox environment in each project as the basis for the job, since > that allows teams to control the version of python used, the > dependencies installed, and add custom steps to their build (such > as for pre-processing the documentation). So, the new or updated > job will start by running "tox -e docs" as it does today. Then it > will run Sphinx again with the instructions to build PDF output, > and copy the results into the directory that the publish job will > use to sync to the web server. And then it will run the scripts to > build translated versions of the documentation as HTML, and copy > the results into place for publishing. > Just a question out of curiosity. You mention that we still want to use the docs environment because it allows fine-grained control over how the documentation is created. However, as I understand it, the PDF output will happen in a more standardized way, outside of that fine-grained control, right? Couldn't that lead to differences between the two outputs? Do we even have to worry about that? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From xuanlangjian at gmail.com Thu Sep 13 07:13:57 2018 From: xuanlangjian at gmail.com (x Lyn) Date: Thu, 13 Sep 2018 15:13:57 +0800 Subject: [openstack-dev] [senlin] Nominations to Senlin Core Team In-Reply-To: References: Message-ID: +1 to both, looking forward to their future contribution. > On Sep 11, 2018, at 12:59 AM, Duc Truong wrote: > > Hi Senlin Core Team, > > I would like to nominate 2 new core reviewers for Senlin: > > [1] Jude Cross (jucross at blizzard.com) > [2] Erik Olof Gunnar Andersson (eandersson at blizzard.com) > > Jude has been doing a number of reviews and contributed some important > patches to Senlin during the Rocky cycle that resolved locking > problems. > > Erik has the most number of reviews in Rocky and has contributed high > quality code reviews for some time. > > [1] http://stackalytics.com/?module=senlin-group&metric=marks&release=rocky&user_id=jucross at blizzard.com > [2] http://stackalytics.com/?module=senlin-group&metric=marks&user_id=eandersson&release=rocky > > Voting is open for 7 days. Please reply with your +1 vote in favor or > -1 as a veto vote. 
> > Regards, > > Duc (dtruong) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From bdobreli at redhat.com Thu Sep 13 09:35:01 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 13 Sep 2018 11:35:01 +0200 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C1847EA@EX10MBOX03.pnnl.gov> References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C1847EA@EX10MBOX03.pnnl.gov> Message-ID: <3c8e807f-e523-10b8-b84b-05d672bb0dd0@redhat.com> On 8/27/18 6:38 PM, Fox, Kevin M wrote: > I think in this context, kubelet without all of kubernetes still has the value that it provides an abstraction layer that podmon/paunch is being suggested to handle. > > It does not need the things you mention, network, sidecar, scaleup/down, etc. You can use as little as you want. > > For example, make a pod yaml per container with hostNetwork: true. it will run just like it was on the host then. You can do just one container. no sidecars necessary. Without the apiserver, it can't do scaleup/down even if you wanted to. > > It provides declarative yaml based management of containers, similar to paunch. so you can skip needing that component. That would be a step into the right direction IMO. > > It also already provides crio and docker support via cri. > > It does provide a little bit of orchestration, in that you drive things with declarative yaml. 
You drop in a yaml file in /etc/kubernetes/manifests, and it will create the container. you delete it, it removes the container. If you change it, it will update the container. and if something goes wrong with the container, it will try and get it back to the requested state automatically. And, it will recover the containers on reboot without help. > > Thanks, > Kevin > > ________________________________________ > From: Sergii Golovatiuk [sgolovat at redhat.com] > Sent: Monday, August 27, 2018 3:46 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls > > Hi, > > On Mon, Aug 27, 2018 at 12:16 PM, Rabi Mishra wrote: >> On Mon, Aug 27, 2018 at 3:25 PM, Sergii Golovatiuk >> wrote: >>> >>> Hi, >>> >>> On Mon, Aug 27, 2018 at 5:32 AM, Rabi Mishra wrote: >>>> On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker wrote: >>> Steve mentioned kubectl (kubernetes CLI which communicates with >> >> >> Not sure what he meant. May be I miss something, but not heard of 'kubectl >> standalone', though he might have meant standalone k8s cluster on every node >> as you think. >> >>> >>> kube-api) not kubelet which is only one component of kubernetes. All >>> kubernetes components may be compiled as one binary (hyperkube) which >>> can be used to minimize footprint. Generated ansible for kubelet is >>> not enough as kubelet doesn't have any orchestration logic. >> >> >> What orchestration logic do we've with TripleO atm? AFAIK we've provide >> roles data for service placement across nodes, right? >> I see standalone kubelet as a first step for scheduling openstack services >> with in k8s cluster in the future (may be). > > It's half measure. I don't see any advantages of that move. We should > either adopt whole kubernetes or doesn't use its components at all as > the maintenance cost will be expensive. 
Using kubelet requires to > resolve networking communication, scale-up/down, sidecar, or inter > services dependencies. > >> >>>>> >>>>> This was a while ago now so this could be worth revisiting in the >>>>> future. >>>>> We'll be making gradual changes, the first of which is using podman to >>>>> manage single containers. However podman has native support for the pod >>>>> format, so I'm hoping we can switch to that once this transition is >>>>> complete. Then evaluating kubectl becomes much easier. >>>>> >>>>>> Question. Rather then writing a middle layer to abstract both >>>>>> container >>>>>> engines, couldn't you just use CRI? CRI is CRI-O's native language, >>>>>> and >>>>>> there is support already for Docker as well. >>>>> >>>>> >>>>> We're not writing a middle layer, we're leveraging one which is already >>>>> there. >>>>> >>>>> CRI-O is a socket interface and podman is a CLI interface that both sit >>>>> on >>>>> top of the exact same Go libraries. At this point, switching to podman >>>>> needs >>>>> a much lower development effort because we're replacing docker CLI >>>>> calls. >>>>> >>>> I see good value in evaluating kubelet standalone and leveraging it's >>>> inbuilt grpc interfaces with cri-o (rather than using podman) as a long >>>> term >>>> strategy, unless we just want to provide an alternative to docker >>>> container >>>> runtime with cri-o. >>> >>> I see no value using kubelet without kubernetes IMHO. >>> >>> >>>> >>>>>> >>>>>> >>>>>> Thanks, >>>>>> Kevin >>>>>> ________________________________________ >>>>>> From: Jay Pipes [jaypipes at gmail.com] >>>>>> Sent: Thursday, August 23, 2018 8:36 AM >>>>>> To: openstack-dev at lists.openstack.org >>>>>> Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for >>>>>> nice >>>>>> API calls >>>>>> >>>>>> Dan, thanks for the details and answers. Appreciated. 
>>>>>> >>>>>> Best, >>>>>> -jay >>>>>> >>>>>> On 08/23/2018 10:50 AM, Dan Prince wrote: >>>>>>> >>>>>>> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes wrote: >>>>>>>> >>>>>>>> On 08/15/2018 04:01 PM, Emilien Macchi wrote: >>>>>>>>> >>>>>>>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >>>>>>>> > wrote: >>>>>>>>> >>>>>>>>> More seriously here: there is an ongoing effort to converge >>>>>>>>> the >>>>>>>>> tools around containerization within Red Hat, and we, TripleO >>>>>>>>> are >>>>>>>>> interested to continue the containerization of our services >>>>>>>>> (which >>>>>>>>> was initially done with Docker & Docker-Distribution). >>>>>>>>> We're looking at how these containers could be managed by k8s >>>>>>>>> one >>>>>>>>> day but way before that we plan to swap out Docker and join >>>>>>>>> CRI-O >>>>>>>>> efforts, which seem to be using Podman + Buildah (among other >>>>>>>>> things). >>>>>>>>> >>>>>>>>> I guess my wording wasn't the best but Alex explained way better >>>>>>>>> here: >>>>>>>>> >>>>>>>>> >>>>>>>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 >>>>>>>>> >>>>>>>>> If I may have a chance to rephrase, I guess our current intention >>>>>>>>> is >>>>>>>>> to >>>>>>>>> continue our containerization and investigate how we can improve >>>>>>>>> our >>>>>>>>> tooling to better orchestrate the containers. >>>>>>>>> We have a nice interface (openstack/paunch) that allows us to run >>>>>>>>> multiple container backends, and we're currently looking outside of >>>>>>>>> Docker to see how we could solve our current challenges with the >>>>>>>>> new >>>>>>>>> tools. >>>>>>>>> We're looking at CRI-O because it happens to be a project with a >>>>>>>>> great >>>>>>>>> community, focusing on some problems that we, TripleO have been >>>>>>>>> facing >>>>>>>>> since we containerized our services. >>>>>>>>> >>>>>>>>> We're doing all of this in the open, so feel free to ask any >>>>>>>>> question. 
>>>>>>>> >>>>>>>> I appreciate your response, Emilien, thank you. Alex' responses to >>>>>>>> Jeremy on the #openstack-tc channel were informative, thank you >>>>>>>> Alex. >>>>>>>> >>>>>>>> For now, it *seems* to me that all of the chosen tooling is very Red >>>>>>>> Hat >>>>>>>> centric. Which makes sense to me, considering Triple-O is a Red Hat >>>>>>>> product. >>>>>>> >>>>>>> Perhaps a slight clarification here is needed. "Director" is a Red >>>>>>> Hat >>>>>>> product. TripleO is an upstream project that is now largely driven by >>>>>>> Red Hat and is today marked as single vendor. We welcome others to >>>>>>> contribute to the project upstream just like anybody else. >>>>>>> >>>>>>> And for those who don't know the history the TripleO project was once >>>>>>> multi-vendor as well. So a lot of the abstractions we have in place >>>>>>> could easily be extended to support distro specific implementation >>>>>>> details. (Kind of what I view podman as in the scope of this thread). >>>>>>> >>>>>>>> I don't know how much of the current reinvention of container >>>>>>>> runtimes >>>>>>>> and various tooling around containers is the result of politics. I >>>>>>>> don't >>>>>>>> know how much is the result of certain companies wanting to "own" >>>>>>>> the >>>>>>>> container stack from top to bottom. Or how much is a result of >>>>>>>> technical >>>>>>>> disagreements that simply cannot (or will not) be resolved among >>>>>>>> contributors in the container development ecosystem. >>>>>>>> >>>>>>>> Or is it some combination of the above? I don't know. 
>>>>>>>> >>>>>>>> What I *do* know is that the current "NIH du jour" mentality >>>>>>>> currently >>>>>>>> playing itself out in the container ecosystem -- reminding me very >>>>>>>> much >>>>>>>> of the Javascript ecosystem -- makes it difficult for any potential >>>>>>>> *consumers* of container libraries, runtimes or applications to be >>>>>>>> confident that any choice they make towards one of the other will be >>>>>>>> the >>>>>>>> *right* choice or even a *possible* choice next year -- or next >>>>>>>> week. >>>>>>>> Perhaps this is why things like openstack/paunch exist -- to give >>>>>>>> you >>>>>>>> options if something doesn't pan out. >>>>>>> >>>>>>> This is exactly why paunch exists. >>>>>>> >>>>>>> Re, the podman thing I look at it as an implementation detail. The >>>>>>> good news is that given it is almost a parity replacement for what we >>>>>>> already use we'll still contribute to the OpenStack community in >>>>>>> similar ways. Ultimately whether you run 'docker run' or 'podman run' >>>>>>> you end up with the same thing as far as the existing TripleO >>>>>>> architecture goes. >>>>>>> >>>>>>> Dan >>>>>>> >>>>>>>> You have a tough job. I wish you all the luck in the world in making >>>>>>>> these decisions and hope politics and internal corporate management >>>>>>>> decisions play as little a role in them as possible. 
>>>>>>>> >>>>>>>> Best, >>>>>>>> -jay >>>> -- >>>> Regards, >>>> Rabi Mishra >>> -- >>> Best Regards, >>> Sergii Golovatiuk >> -- >> Regards, >> Rabi Mishra > -- > Best Regards, > Sergii Golovatiuk -- Best regards, Bogdan Dobrelya, Irc #bogdando From zigo at debian.org Thu Sep 13 10:23:32 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 13 Sep 2018
12:23:32 +0200 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: <42116948-1c6d-5501-2df0-42f042d48365@windriver.com> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <42116948-1c6d-5501-2df0-42f042d48365@windriver.com> Message-ID: <4cc7a165-ea99-65b0-d483-617bd62655da@debian.org> On 09/13/2018 12:52 AM, Chris Friesen wrote: > On 9/12/2018 12:04 PM, Doug Hellmann wrote: > >>> This came up in a Vancouver summit session (the python3 one I think). >>> General consensus there seemed to be that we should have grenade jobs >>> that run python2 on the old side and python3 on the new side and test >>> the update from one to another through a release that way. >>> Additionally there was thought that the nova partial job (and similar >>> grenade jobs) could hold the non upgraded node on python2 and that >>> would talk to a python3 control plane. >>> >>> I haven't seen or heard of anyone working on this yet though. >>> >>> Clark >>> >> >> IIRC, we also talked about not supporting multiple versions of >> python on a given node, so all of the services on a node would need >> to be upgraded together. > > As I understand it, the various services talk to each other using > over-the-wire protocols. Assuming this is correct, why would we need to > ensure they are using the same python version? > > Chris There are indeed a few cases where things can break, especially with character encoding. If you want an example of what may go wrong, here's one with Cinder and Ceph: https://review.openstack.org/568813 Without the encodeutils.safe_decode() call, Cinder over Ceph was just crashing for me in Debian (Debian is full Python 3 now...). In this example, we're just over the wire, and it was supposed to be the same.
Cheers, Thomas Goirand (zigo) From naichuan.sun at citrix.com Thu Sep 13 10:39:56 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Thu, 13 Sep 2018 10:39:56 +0000 Subject: [openstack-dev] About microversion setting to enable nested resource provider Message-ID: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> Hi, guys, Looks n-rp is disabled by default because microversion matches 1.29 : https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/handlers/allocation_candidate.py#L252 Anyone know how to set the microversion to enable n-rp in placement? Thank you very much. BR. Naichuan Sun -------------- next part -------------- An HTML attachment was scrubbed... URL: From yjf1970231893 at gmail.com Thu Sep 13 11:52:17 2018 From: yjf1970231893 at gmail.com (Jeff Yang) Date: Thu, 13 Sep 2018 19:52:17 +0800 Subject: [openstack-dev] [octavia] Optimize the query of the octavia database Message-ID: Hi, All As octavia resources increase, I found that running the "openstack loadbalancer list" command takes longer and longer. Sometimes a 504 error is reported. By reading the code, I found that octavia will performs complex left outer join queries when acquiring resources such as loadbalancer, listener, pool, etc. in order to only make one trip to the database. Reference code: http://paste.openstack.org/show/730022 Line 133 Generated SQL statements: http://paste.openstack.org/show/730021 So, I suggest that adjust the query strategy to provide different join queries for different resources. https://storyboard.openstack.org/#!/story/2003751 -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Sep 13 13:10:48 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 13 Sep 2018 07:10:48 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? 
In-Reply-To: <4cc7a165-ea99-65b0-d483-617bd62655da@debian.org> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <42116948-1c6d-5501-2df0-42f042d48365@windriver.com> <4cc7a165-ea99-65b0-d483-617bd62655da@debian.org> Message-ID: <1536844110-sup-4168@lrrr.local> Excerpts from Thomas Goirand's message of 2018-09-13 12:23:32 +0200: > On 09/13/2018 12:52 AM, Chris Friesen wrote: > > On 9/12/2018 12:04 PM, Doug Hellmann wrote: > > > >>> This came up in a Vancouver summit session (the python3 one I think). > >>> General consensus there seemed to be that we should have grenade jobs > >>> that run python2 on the old side and python3 on the new side and test > >>> the update from one to another through a release that way. > >>> Additionally there was thought that the nova partial job (and similar > >>> grenade jobs) could hold the non upgraded node on python2 and that > >>> would talk to a python3 control plane. > >>> > >>> I haven't seen or heard of anyone working on this yet though. > >>> > >>> Clark > >>> > >> > >> IIRC, we also talked about not supporting multiple versions of > >> python on a given node, so all of the services on a node would need > >> to be upgraded together. > > > > As I understand it, the various services talk to each other using > > over-the-wire protocols.  Assuming this is correct, why would we need to > > ensure they are using the same python version? > > > > Chris > > There are indeed a few cases were things can break, especially with > character encoding. If you want an example of what may go wrong, here's > one with Cinder and Ceph: > > https://review.openstack.org/568813 > > Without the encodeutils.safe_decode() call, Cinder over Ceph was just > crashing for me in Debian (Debian is full Python 3 now...). In this > example, we're just over the wire, and it was supposed to be the same. 
> Yet, only an integration test could have detect it (and I discovered it > running puppet-openstack on Debian). Was that caused (or found) by first running cinder under python 2 and then upgrading to python 3 on the same host? That's the test case Jim originally suggested and I'm trying to understand if we actually need it. Doug From doug at doughellmann.com Thu Sep 13 13:17:08 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 13 Sep 2018 07:17:08 -0600 Subject: [openstack-dev] [doc][i18n][infra][tc] plan for PDF and translation builds for documentation In-Reply-To: References: <1536789967-sup-82@lrrr.local> Message-ID: <1536844558-sup-5414@lrrr.local> Excerpts from Andreas Jaeger's message of 2018-09-13 08:14:30 +0200: > I like the plan, thanks! One suggestion below: > > The translated documents are build for releasenotes already in a similar > way as proposed. here we update the index.rst file on the fly to link to > all translated versions, see the bottom of e.g. > https://docs.openstack.org/releasenotes/openstack-manuals/ > > I suggest that you look also as part of building how to link to PDFs and > translated documents. > > For reference, the logic for releasenotes translation is here: > http://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/build-releasenotes/tasks/main.yaml#n10 Ah, yes, that's a detail I left out. I want to add a directive to the theme to list those other formats and versions, so we don't have to edit the content of the file on the fly. > I would appreciate if you handle releasenotes the same way as other > documents, so if you want to change releasenotes in the end, please do Good idea. 
Doug From jaypipes at gmail.com Thu Sep 13 13:19:07 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 13 Sep 2018 09:19:07 -0400 Subject: [openstack-dev] About microversion setting to enable nested resource provider In-Reply-To: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> References: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> Message-ID: <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> On 09/13/2018 06:39 AM, Naichuan Sun wrote: > Hi, guys, > > Looks n-rp is disabled by default because microversion matches 1.29 : > https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/handlers/allocation_candidate.py#L252 > > Anyone know how to set the microversion to enable n-rp in placement? It is the client which must send the 1.29+ placement API microversion header to indicate to the placement API server that the client wants to receive nested provider information in the allocation candidates response. Currently, nova-scheduler calls the scheduler reportclient's get_allocation_candidates() method: https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/manager.py#L138 The scheduler reportclient's get_allocation_candidates() method currently passes the 1.25 placement API microversion header: https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L353 https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L53 In order to get the nested information returned in the allocation candidates response, that would need to be upped to 1.29. 
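To make the mechanism concrete: the microversion is not a server-side configuration option but a per-request header. A minimal sketch of what a 1.29-capable client sends (illustrative only, not nova's actual report client code; the token value is a placeholder):

```python
# Illustrative sketch only, not nova's actual report client code.
# The placement microversion is chosen per request via the
# "OpenStack-API-Version" header; there is no configuration file knob.
# The token value here is a placeholder.

def allocation_candidates_headers(token, microversion="1.29"):
    """Build headers for GET /allocation_candidates against placement."""
    return {
        "X-Auth-Token": token,
        "Accept": "application/json",
        # Placement only includes nested-provider information in the
        # response when the client asks for microversion 1.29 or higher.
        "OpenStack-API-Version": "placement %s" % microversion,
    }

headers = allocation_candidates_headers("dummy-token")
print(headers["OpenStack-API-Version"])  # -> placement 1.29
```

Any HTTP client can then issue GET /allocation_candidates with these headers; within nova itself the equivalent change is bumping the version string passed in scheduler/client/report.py from 1.25 to 1.29, which is what the linked lines control.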
Best, -jay From doug at doughellmann.com Thu Sep 13 13:23:53 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 13 Sep 2018 07:23:53 -0600 Subject: [openstack-dev] [doc][i18n][infra][tc] plan for PDF and translation builds for documentation In-Reply-To: References: <1536789967-sup-82@lrrr.local> Message-ID: <1536844638-sup-1991@lrrr.local> Excerpts from Michel Peterson's message of 2018-09-13 10:04:27 +0300: > On Thu, Sep 13, 2018 at 1:09 AM, Doug Hellmann > wrote: > > > The longer version is that we want to continue to use the existing > > tox environment in each project as the basis for the job, since > > that allows teams to control the version of python used, the > > dependencies installed, and add custom steps to their build (such > > as for pre-processing the documentation). So, the new or updated > > job will start by running "tox -e docs" as it does today. Then it > > will run Sphinx again with the instructions to build PDF output, > > and copy the results into the directory that the publish job will > > use to sync to the web server. And then it will run the scripts to > > build translated versions of the documentation as HTML, and copy > > the results into place for publishing. > > > > Just a question out of curiosity. You mention that we still want to use the > docs environment because it allows fine grained control over how the > documentation is created. However, as I understand, the PDF output will > happen in a more standardized way and outside of that fine grained control, > right? That couldn't lead to differences in both documentations? Do we have > to even worry about that? Good question. 
The idea is to run "tox -e docs" to get the regular HTML, then something like .tox/docs/bin/sphinx-build -b latex doc/build doc/build/latex cd doc/build/latex make cp doc/build/latex/*.pdf doc/build/html We would run the HTML translation builds in a similar way by invoking sphinx-build from the virtualenv repeatedly with different locale settings based on what translations exist. In my earlier comment, I was thinking of the case where a team runs a script to generate rst content files before invoking sphinx to build the HTML. That script would have been run before the PDF generation happens, so the content should be the same. That also applies for anyone using sphinx add-ons, which will be available to the latex builder because we'll be using the version of sphinx installed in the virtualenv managed by tox. Doug From gmann at ghanshyammann.com Thu Sep 13 13:48:08 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 13 Sep 2018 22:48:08 +0900 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: <1536844110-sup-4168@lrrr.local> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <42116948-1c6d-5501-2df0-42f042d48365@windriver.com> <4cc7a165-ea99-65b0-d483-617bd62655da@debian.org> <1536844110-sup-4168@lrrr.local> Message-ID: <165d33064a1.d4f2162d86926.8984788972582928220@ghanshyammann.com> ---- On Thu, 13 Sep 2018 22:10:48 +0900 Doug Hellmann wrote ---- > Excerpts from Thomas Goirand's message of 2018-09-13 12:23:32 +0200: > > On 09/13/2018 12:52 AM, Chris Friesen wrote: > > > On 9/12/2018 12:04 PM, Doug Hellmann wrote: > > > > > >>> This came up in a Vancouver summit session (the python3 one I think). > > >>> General consensus there seemed to be that we should have grenade jobs > > >>> that run python2 on the old side and python3 on the new side and test > > >>> the update from one to another through a release that way. 
> > >>> Additionally there was thought that the nova partial job (and similar > > >>> grenade jobs) could hold the non upgraded node on python2 and that > > >>> would talk to a python3 control plane. > > >>> > > >>> I haven't seen or heard of anyone working on this yet though. > > >>> > > >>> Clark > > >>> > > >> > > >> IIRC, we also talked about not supporting multiple versions of > > >> python on a given node, so all of the services on a node would need > > >> to be upgraded together. > > > > > > As I understand it, the various services talk to each other using > > > over-the-wire protocols. Assuming this is correct, why would we need to > > > ensure they are using the same python version? > > > > > > Chris > > > > There are indeed a few cases were things can break, especially with > > character encoding. If you want an example of what may go wrong, here's > > one with Cinder and Ceph: > > > > https://review.openstack.org/568813 > > > > Without the encodeutils.safe_decode() call, Cinder over Ceph was just > > crashing for me in Debian (Debian is full Python 3 now...). In this > > example, we're just over the wire, and it was supposed to be the same. > > Yet, only an integration test could have detect it (and I discovered it > > running puppet-openstack on Debian). I think that should be detected by the py3 ceph job "legacy-tempest-dsvm-py35-full-devstack-plugin-ceph". Was it failing, or did anyone check its status during the failure? This job is experimental in the cinder gate [1], so I could not get its failure data from the health dashboard. Maybe we should move it to the check pipeline to cover cinder+ceph on py3? [1] https://github.com/openstack-infra/project-config/blob/4eeec4cc6e18dd8933b16a2ddda75b469b893437/zuul.d/projects.yaml#L3471 -gmann > > Was that caused (or found) by first running cinder under python 2 > and then upgrading to python 3 on the same host? That's the test > case Jim originally suggested and I'm trying to understand if we > actually need it.
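As an aside, the kind of py2-to-py3 encoding break being discussed is easy to reproduce in isolation. The sketch below is not Cinder's actual code; safe_decode() here is a simplified stand-in for oslo.utils' encodeutils.safe_decode(), just to show why decoding at the service boundary matters:

```python
# Not Cinder's actual code: a minimal reproduction of the bytes-vs-text
# break described above. safe_decode() is a simplified stand-in for
# oslo.utils' encodeutils.safe_decode().

def safe_decode(text, encoding="utf-8"):
    """Return text as str, decoding it first if it arrived as bytes."""
    if isinstance(text, bytes):
        return text.decode(encoding)
    return text

# What a driver like librbd hands back under Python 3: bytes, not str.
pool_name = b"volumes"

# On Python 2, b"volumes" == "volumes" was True; on Python 3 bytes never
# compare equal to str, so this check silently stops matching.
assert (pool_name == "volumes") is False

# And mixing the raw value into a message raises on Python 3 instead of
# concatenating as it did on Python 2.
try:
    "rbd://" + pool_name
except TypeError:
    pass

# Decoding at the boundary restores the old behaviour.
assert safe_decode(pool_name) == "volumes"
print("rbd://" + safe_decode(pool_name))  # -> rbd://volumes
```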
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Thu Sep 13 14:19:21 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 13 Sep 2018 23:19:21 +0900 Subject: [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: <165d34cf822.b5f4da7688669.7192778226044204749@ghanshyammann.com> ---- On Thu, 13 Sep 2018 00:47:27 +0900 Matt Riedemann wrote ---- > Rather than take a tangent on Kristi's candidacy thread [1], I'll bring > this up separately. > > Kristi said: > > "Ultimately, this list isn’t exclusive and I’d love to hear your and > other people's opinions about what you think the I should focus on." > > Well since you asked... > > Some feedback I gave to the public cloud work group yesterday was to get > their RFE/bug list ranked from the operator community (because some of > the requests are not exclusive to public cloud), and then put pressure > on the TC to help project manage the delivery of the top issue. I would > like all of the SIGs to do this. The upgrades SIG should rank and > socialize their #1 issue that needs attention from the developer > community - maybe that's better upgrade CI testing for deployment > projects, maybe it's getting the pre-upgrade checks goal done for Stein. > The UC should also be doing this; maybe that's the UC saying, "we need > help on closing feature gaps in openstack client and/or the SDK". I > don't want SIGs to bombard the developers with *all* of their > requirements, but I want to get past *talking* about the *same* issues > *every* time we get together. 
I want each group to say, "this is our top issue and we want developers to focus on it." For example, the extended > maintenance resolution [2] was purely birthed from frustration about > talking about LTS and stable branch EOL every time we get together. It's > also the responsibility of the operator and user communities to weigh in > on proposed release goals, but the TC should be actively trying to get > feedback from those communities about proposed goals, because I bet > operators and users don't care about mox removal [3]. I agree with this, and I feel this is real value we can add in the current situation, where contributors are scarce in almost all of the projects. When we set goals for any cycle, we should weight user/operator/SIG priorities in the selection checklist and categorize each goal with a tag such as "user-oriented" or "coding-oriented" (the latter benefiting only developers and code maintenance). We would then concentrate more on the first category and leave the second one to the project teams, which can plan those items according to their own bandwidth and priorities. I am not saying code/developer-oriented goals should not be initiated by the TC, but they should be lower on the priority list. -gmann > > I want to see the TC be more of a cross-project project management > group, like a group of Ildikos and what she did between nova and cinder > to get volume multi-attach done, which took persistent supervision to > herd the cats and get it delivered. Lance is already trying to do this > with unified limits. Doug is doing this with the python3 goal. I want my > elected TC members to be pushing tangible technical deliverables forward. > > I don't find any value in the TC debating ad nauseam about visions and > constellations and "what is openstack?". Scope will change over time > depending on who is contributing to openstack, we should just accept > this.
And we need to realize that if we are failing to deliver value to > operators and users, they aren't going to use openstack and then "what > is openstack?" won't matter because no one will care. > > So I encourage all elected TC members to work directly with the various > SIGs to figure out their top issue and then work on managing those > deliverables across the community because the TC is particularly well > suited to do so given the elected position. I realize political and > bureaucratic "how should openstack deal with x?" things will come up, > but those should not be the priority of the TC. So instead of > philosophizing about things like, "should all compute agents be in a > single service with a REST API" for hours and hours, every few months - > immediately ask, "would doing that get us any closer to achieving top > technical priority x?" Because if not, or it's so fuzzy in scope that no > one sees the way forward, document a decision and then drop it. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html > [2] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > [3] https://governance.openstack.org/tc/goals/rocky/mox_removal.html > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From naichuan.sun at citrix.com Thu Sep 13 14:31:24 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Thu, 13 Sep 2018 14:31:24 +0000 Subject: [openstack-dev] About microversion setting to enable nested resource provider In-Reply-To: <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> References: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> Message-ID: 
<90a534cec8ff4957a141af2ed1686934@SINPEX02CL01.citrite.net> Thank you very much, Jay. Is there somewhere I could set the microversion (some configuration file?), or do I just modify the source code to set it? BR. Naichuan Sun -----Original Message----- From: Jay Pipes [mailto:jaypipes at gmail.com] Sent: Thursday, September 13, 2018 9:19 PM To: Naichuan Sun ; OpenStack Development Mailing List (not for usage questions) Cc: melanie witt ; efried at us.ibm.com; Sylvain Bauza Subject: Re: About microversion setting to enable nested resource provider On 09/13/2018 06:39 AM, Naichuan Sun wrote: > Hi, guys, > > Looks n-rp is disabled by default because microversion matches 1.29 : > https://github.com/openstack/nova/blob/master/nova/api/openstack/place > ment/handlers/allocation_candidate.py#L252 > > Anyone know how to set the microversion to enable n-rp in placement? It is the client which must send the 1.29+ placement API microversion header to indicate to the placement API server that the client wants to receive nested provider information in the allocation candidates response. Currently, nova-scheduler calls the scheduler reportclient's get_allocation_candidates() method: https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/manager.py#L138 The scheduler reportclient's get_allocation_candidates() method currently passes the 1.25 placement API microversion header: https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L353 https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L53 In order to get the nested information returned in the allocation candidates response, that would need to be upped to 1.29.
Best, -jay From dangtrinhnt at gmail.com Thu Sep 13 14:45:38 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 13 Sep 2018 23:45:38 +0900 Subject: [openstack-dev] [release][searchlight] Need rights to create stable branches and tags Message-ID: Dear Release Management team, As we're reaching the Stein-1 milestone, I would like to prepare the branches and tags. According to the documents, it's the job of the Release Management team but it also says I as the PTL can do it. I wonder which is the best way because Searchlight has missed several milestones. It would be great if anyone in the Release Management team can give me some advice. Best regards, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Sep 13 14:52:06 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 13 Sep 2018 23:52:06 +0900 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> Message-ID: <165d36af283.bec3759c90672.8271445509852215795@ghanshyammann.com> ---- On Thu, 13 Sep 2018 08:05:17 +0900 Lance Bragstad wrote ---- > > > On Wed, Sep 12, 2018 at 3:55 PM Jeremy Stanley wrote: > On 2018-09-12 09:47:27 -0600 (-0600), Matt Riedemann wrote: > [...] > > So I encourage all elected TC members to work directly with the > > various SIGs to figure out their top issue and then work on > > managing those deliverables across the community because the TC is > > particularly well suited to do so given the elected position. > [...] > > I almost agree with you. 
I think the OpenStack TC members should be > actively engaged in recruiting and enabling interested people in the > community to do those things, but I don't think such work should be > solely the domain of the TC and would hate to give the impression > that you must be on the TC to have such an impact. > > I agree that relaying that type of impression would be negative, but I'm not sure this specifically would do that. I think we've been good about letting people step up to drive initiatives without being in an elected position [0]. > IMHO, I think the point Matt is making here is more about ensuring sure we have people to do what we've agreed upon, as a community, as being mission critical. Enablement is imperative, but no matter how good we are at it, sometimes we really just needs hands to do the work. > [0] Of the six goals agreed upon since we've implemented champions in Queens, five of them have been championed by non-TC members (Chandan championed two, in back-to-back releases). -- True, doing any such cross-project work does not, and should not, require being on the TC, and I do not think anyone objects to that statement. Yes, recruiting people is the key thing here, and the TC can take ownership of it. I am sure having more and more people involved in such cross-project work will surely help us find new leaders. There are a lot of contributors who might have bandwidth but are not stepping up for cross-project help; such an initiative from the TC can help them come forward. And any cross-project work led by non-TC members will always be a great example the TC can use to encourage other contributors toward such activity. But the key point here is: if no one has stepped up for priority cross-project work (much needed for OpenStack production use cases), then the TC can play a role in finding, or being, the owner of that work.
-gmann > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at fried.cc Thu Sep 13 15:14:07 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 13 Sep 2018 09:14:07 -0600 Subject: [openstack-dev] About microversion setting to enable nested resource provider In-Reply-To: <90a534cec8ff4957a141af2ed1686934@SINPEX02CL01.citrite.net> References: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> <90a534cec8ff4957a141af2ed1686934@SINPEX02CL01.citrite.net> Message-ID: <0acdc7e5-432f-fc99-4ce2-c9df53af1a3b@fried.cc> There's a patch series in progress for this: https://review.openstack.org/#/q/topic:use-nested-allocation-candidates It needs some TLC. I'm sure gibi and tetsuro would welcome some help... efried On 09/13/2018 08:31 AM, Naichuan Sun wrote: > Thank you very much, Jay. > Is there somewhere I could set microversion(some configure file?), Or just modify the source code to set it? > > BR. 
> Naichuan Sun > > -----Original Message----- > From: Jay Pipes [mailto:jaypipes at gmail.com] > Sent: Thursday, September 13, 2018 9:19 PM > To: Naichuan Sun ; OpenStack Development Mailing List (not for usage questions) > Cc: melanie witt ; efried at us.ibm.com; Sylvain Bauza > Subject: Re: About microversion setting to enable nested resource provider > > On 09/13/2018 06:39 AM, Naichuan Sun wrote: >> Hi, guys, >> >> Looks n-rp is disabled by default because microversion matches 1.29 : >> https://github.com/openstack/nova/blob/master/nova/api/openstack/place >> ment/handlers/allocation_candidate.py#L252 >> >> Anyone know how to set the microversion to enable n-rp in placement? > > It is the client which must send the 1.29+ placement API microversion header to indicate to the placement API server that the client wants to receive nested provider information in the allocation candidates response. > > Currently, nova-scheduler calls the scheduler reportclient's > get_allocation_candidates() method: > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/manager.py#L138 > > The scheduler reportclient's get_allocation_candidates() method currently passes the 1.25 placement API microversion header: > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L353 > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L53 > > In order to get the nested information returned in the allocation candidates response, that would need to be upped to 1.29. 
> > Best, > -jay > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sbauza at redhat.com Thu Sep 13 15:46:41 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 13 Sep 2018 09:46:41 -0600 Subject: [openstack-dev] About microversion setting to enable nested resource provider In-Reply-To: <0acdc7e5-432f-fc99-4ce2-c9df53af1a3b@fried.cc> References: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> <90a534cec8ff4957a141af2ed1686934@SINPEX02CL01.citrite.net> <0acdc7e5-432f-fc99-4ce2-c9df53af1a3b@fried.cc> Message-ID: Hey Naichuan, FWIW, we discussed on the missing pieces for nested resource providers. See the (currently-in-use) etherpad https://etherpad.openstack.org/p/nova-ptg-stein and lookup for "closing the gap on nested resource providers" (L144 while I speak) The fact that we are not able to schedule yet is a critical piece that we said we're going to work on it as soon as we can. -Sylvain On Thu, Sep 13, 2018 at 9:14 AM, Eric Fried wrote: > There's a patch series in progress for this: > > https://review.openstack.org/#/q/topic:use-nested-allocation-candidates > > It needs some TLC. I'm sure gibi and tetsuro would welcome some help... > > efried > > On 09/13/2018 08:31 AM, Naichuan Sun wrote: > > Thank you very much, Jay. > > Is there somewhere I could set microversion(some configure file?), Or > just modify the source code to set it? > > > > BR. 
> > Naichuan Sun > > > > -----Original Message----- > > From: Jay Pipes [mailto:jaypipes at gmail.com] > > Sent: Thursday, September 13, 2018 9:19 PM > > To: Naichuan Sun ; OpenStack Development > Mailing List (not for usage questions) > > Cc: melanie witt ; efried at us.ibm.com; Sylvain Bauza > > > Subject: Re: About microversion setting to enable nested resource > provider > > > > On 09/13/2018 06:39 AM, Naichuan Sun wrote: > >> Hi, guys, > >> > >> Looks n-rp is disabled by default because microversion matches 1.29 : > >> https://github.com/openstack/nova/blob/master/nova/api/openstack/place > >> ment/handlers/allocation_candidate.py#L252 > >> > >> Anyone know how to set the microversion to enable n-rp in placement? > > > > It is the client which must send the 1.29+ placement API microversion > header to indicate to the placement API server that the client wants to > receive nested provider information in the allocation candidates response. > > > > Currently, nova-scheduler calls the scheduler reportclient's > > get_allocation_candidates() method: > > > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a > 534410b5df/nova/scheduler/manager.py#L138 > > > > The scheduler reportclient's get_allocation_candidates() method > currently passes the 1.25 placement API microversion header: > > > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a > 534410b5df/nova/scheduler/client/report.py#L353 > > > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a > 534410b5df/nova/scheduler/client/report.py#L53 > > > > In order to get the nested information returned in the allocation > candidates response, that would need to be upped to 1.29. 
> > > > Best, > > -jay > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Sep 13 15:58:19 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 13 Sep 2018 09:58:19 -0600 (MDT) Subject: [openstack-dev] [openstack][infra]Including Functional Tests in Coverage In-Reply-To: References: Message-ID: On Wed, 12 Sep 2018, Michael Johnson wrote: > We do this in Octavia. The openstack-tox-cover calls the cover > environment in tox.ini, so you can add it there. We've got this in progress for placement as well: https://review.openstack.org/#/c/600501/ https://review.openstack.org/#/c/600502/ It works well and is pretty critical in placement because most of the "important" tests are functional. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From Kevin.Fox at pnnl.gov Thu Sep 13 16:14:22 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 13 Sep 2018 16:14:22 +0000 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> , Message-ID: <1A3C52DFCD06494D8528644858247BF01C19A62A@EX10MBOX03.pnnl.gov> How about stated this way, Its the tc's responsibility to get it done. 
Either by delegating the activity, or by doing it themselves. But either way, it needs to get done. It's a ball that has been dropped too much in OpenStack's history. If no one is ultimately responsible, balls will keep getting dropped. Thanks, Kevin ________________________________________ From: Matt Riedemann [mriedemos at gmail.com] Sent: Wednesday, September 12, 2018 4:00 PM To: Dan Smith; Thierry Carrez Cc: OpenStack Development Mailing List (not for usage questions); openstack-sigs at lists.openstack.org; openstack-operators at lists.openstack.org Subject: Re: [Openstack-sigs] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) On 9/12/2018 3:30 PM, Dan Smith wrote: >> I'm just a bit worried to limit that role to the elected TC members. If >> we say "it's the role of the TC to do cross-project PM in OpenStack" >> then we artificially limit the number of people who would sign up to do >> that kind of work. You mention Ildiko and Lance: they did that line of >> work without being elected. > Why would saying that we _expect_ the TC members to do that work limit > such activities only to those that are on the TC? I would expect the TC > to take on the less-fun or often-neglected efforts that we all know are > needed but don't have an obvious champion or sponsor. > > I think we expect some amount of widely-focused technical or project > leadership from TC members, and certainly that expectation doesn't > prevent others from leading efforts (even in the areas of proposing TC > resolutions, etc) right? Absolutely. I'm not saying the cross-project project management should be restricted to or solely the responsibility of the TC. It's obvious there are people outside of the TC that have already been doing this - and no it's not always elected PTLs either.
What I want is elected TC members to prioritize driving technical deliverables to completion based on ranked input from operators/users/SIGs over philosophical debates and politics/bureaucracy and help to complete the technical tasks if possible. -- Thanks, Matt _______________________________________________ openstack-sigs mailing list openstack-sigs at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From eandersson at blizzard.com Thu Sep 13 16:32:07 2018 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Thu, 13 Sep 2018 16:32:07 +0000 Subject: [openstack-dev] [octavia] Optimize the query of the octavia database In-Reply-To: References: Message-ID: <423483AB-0159-4C01-9CC5-A61AB24A4341@blizzard.com> This was solved in neutron-lbaas recently, maybe we could adopt the same method for Octavia? Sent from my iPhone On Sep 13, 2018, at 4:54 AM, Jeff Yang wrote: Hi, All As Octavia resources increase, I found that running the "openstack loadbalancer list" command takes longer and longer. Sometimes a 504 error is reported. By reading the code, I found that Octavia performs complex left outer join queries when acquiring resources such as loadbalancer, listener, pool, etc. in order to only make one trip to the database. Reference code: http://paste.openstack.org/show/730022 Line 133 Generated SQL statements: http://paste.openstack.org/show/730021 So, I suggest adjusting the query strategy to provide different join queries for different resources. https://storyboard.openstack.org/#!/story/2003751 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zhipengh512 at gmail.com Thu Sep 13 16:38:31 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 13 Sep 2018 10:38:31 -0600 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C19A62A@EX10MBOX03.pnnl.gov> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> <1A3C52DFCD06494D8528644858247BF01C19A62A@EX10MBOX03.pnnl.gov> Message-ID: On Thu, Sep 13, 2018 at 10:15 AM Fox, Kevin M wrote: > How about stated this way, > Its the tc's responsibility to get it done. Either by delegating the > activity, or by doing it themselves. But either way, it needs to get done. > Its a ball that has been dropped too much in OpenStacks history. If no one > is ultimately responsible, balls will keep getting dropped. > > Thanks, > Kevin > +1 Kevin -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Thu Sep 13 17:39:56 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 13 Sep 2018 11:39:56 -0600 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-31 update In-Reply-To: <1536782190-sup-6197@lrrr.local> References: <84f4ab63-5790-1ba8-7be2-eadc98f3b3ae@gmail.com> <1536782190-sup-6197@lrrr.local> Message-ID: <3842c137-11fd-d40b-aea8-3e8642be0800@nemebean.com> On 09/12/2018 01:57 PM, Doug Hellmann wrote: > Excerpts from Ben Nemec's message of 2018-09-12 13:51:16 -0600: >> >> On 09/04/2018 06:49 PM, Matt Riedemann wrote: >>> On 9/4/2018 6:39 PM, Ben Nemec wrote: >>>> Would it be helpful to factor some of the common code out into an Oslo >>>> library so projects basically just have to subclass, implement check >>>> functions, and add them to the _upgrade_checks dict? It's not a huge >>>> amount of code, but a bunch of it seems like it would need to be >>>> copy-pasted into every project. I have a tentative topic on the Oslo >>>> PTG schedule for this but figured I should check if it's something we >>>> even want to pursue. >>> >>> Yeah I'm not opposed to trying to pull the nova stuff into a common >>> library for easier consumption in other projects, I just haven't devoted >>> the time for it, nor will I probably have the time to do it. If others >>> are willing to investigate that it would be great. >>> >> >> Okay, here's a first shot at such a library: >> https://github.com/cybertron/oslo.upgradecheck >> >> Lots of rough edges that would need to be addressed before it could be >> released, but it demonstrates the basic idea I had in mind for this. The >> upgradecheck module contains the common code, and __main__.py is a demo >> use of the code. >> >> -Ben >> > > Nice! Let's get that imported into gerrit and keep iterating on it to > make it usable for the goal. Maybe we can get one of the projects most > interested in working on this goal early to help with testing and UX > feedback. 
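The pattern Ben and Matt are discussing — a shared base class that runs whatever a project registers in its `_upgrade_checks` mapping — might look roughly like this (names and result codes are illustrative, not the actual oslo.upgradecheck interface):

```python
import enum


class Code(enum.IntEnum):
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2


class UpgradeCommands:
    """Common base: run every registered check, report the worst result."""

    _upgrade_checks = {}  # name -> check function, filled in by subclasses

    def check(self):
        worst = Code.SUCCESS
        for name, func in sorted(self._upgrade_checks.items()):
            code, details = func(self)
            print(f"{name}: {code.name} ({details})")
            worst = max(worst, code)
        return worst  # non-zero maps naturally to a non-zero exit status


class MyProjectCommands(UpgradeCommands):
    """What a consuming project would copy-paste far less of."""

    def _check_api_db(self):
        # A real check would inspect config or database state here.
        return Code.SUCCESS, "schema is current"

    def _check_deprecated_options(self):
        return Code.WARNING, "option 'foo' is deprecated; clean up first"

    _upgrade_checks = {
        "API DB": _check_api_db,
        "Deprecated options": _check_deprecated_options,
    }


status = MyProjectCommands().check()
```

With the base class in a library, each project only writes the check methods and the dict, which is the copy-paste Ben wanted to avoid.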
> Okay, I got all the test jobs working and even added a few basic unit tests. I think it's about ready for import so I'll take a look at doing that soon. From jim at jimrollenhagen.com Thu Sep 13 18:08:08 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 13 Sep 2018 12:08:08 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: <1536783914-sup-2738@lrrr.local> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <1536783914-sup-2738@lrrr.local> Message-ID: On Wed, Sep 12, 2018 at 2:28 PM, Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-09-12 12:04:02 -0600: > > Excerpts from Clark Boylan's message of 2018-09-12 10:44:55 -0700: > > > On Wed, Sep 12, 2018, at 10:23 AM, Jim Rollenhagen wrote: > > > > The process of operators upgrading Python versions across their > fleet came > > > > up this morning. It's fairly obvious that operators will want to do > this in > > > > a rolling fashion. > > > > > > > > Has anyone considered doing this in CI? For example, running > multinode > > > > grenade with python 2 on one node and python 3 on the other node. > > > > > > > > Should we (openstack) test this situation, or even care? > > > > > > > > > > This came up in a Vancouver summit session (the python3 one I think). > General consensus there seemed to be that we should have grenade jobs that > run python2 on the old side and python3 on the new side and test the update > from one to another through a release that way. Additionally there was > thought that the nova partial job (and similar grenade jobs) could hold the > non upgraded node on python2 and that would talk to a python3 control plane. > > > > > > I haven't seen or heard of anyone working on this yet though. > > > > > > Clark > > > > > > > IIRC, we also talked about not supporting multiple versions of > > python on a given node, so all of the services on a node would need > > to be upgraded together. 
> > > > Doug > > I spent a little time talking with the QA team about setting up > this job, and Attila pointed out that we should think about what > exactly we think would break during a 2-to-3 in-place upgrade like > this. > > Keeping in mind that we are still testing initial installation under > both versions and upgrades under python 2, do we have any specific > concerns about the python *version* causing upgrade issues? > A specific example brought up in the ironic room was the way we encode exceptions in oslo.messaging for transmitting over RPC. I know that we've found encoding bugs in that in the past, and one can imagine that RPC between a service running on py2 and a service running on py3 could have similar issues. It's definitely edge cases that we'd be catching here (if any), so I'm personally fine with assuming it will just work. But I wanted to pose the question to the list, as we agreed this isn't only an ironic problem. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgagne at calavera.ca Thu Sep 13 18:12:56 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Thu, 13 Sep 2018 14:12:56 -0400 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: <1536775296-sup-6148@lrrr.local> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> Message-ID: On Wed, Sep 12, 2018 at 2:04 PM, Doug Hellmann wrote: > > IIRC, we also talked about not supporting multiple versions of > python on a given node, so all of the services on a node would need > to be upgraded together. > Will services support both versions at some point for the same OpenStack release? Or is it already the case? I would like to avoid having to upgrade Nova, Neutron and Ceilometer at the same time since all end up running on a compute node and sharing the same python version. 
-- Mathieu From samuel at cassi.ba Thu Sep 13 19:58:28 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Thu, 13 Sep 2018 12:58:28 -0700 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C19A62A@EX10MBOX03.pnnl.gov> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> <1A3C52DFCD06494D8528644858247BF01C19A62A@EX10MBOX03.pnnl.gov> Message-ID: On Thu, Sep 13, 2018 at 9:14 AM, Fox, Kevin M wrote: > How about stating it this way: > It's the TC's responsibility to get it done. Either by delegating the activity, or by doing it themselves. But either way, it needs to get done. It's a ball that has been dropped too much in OpenStack's history. If no one is ultimately responsible, balls will keep getting dropped. > > Thanks, > Kevin I see the role of TC the same way I do the PTL hat, but on more of a meta scale: too much direct involvement can stifle things. On the inverse, not enough involvement can result in people saying one's work is legacy, to be nice, or dead, at worst. All too often, we humans get hung up on the definitions of words, sometimes to the point of inaction. It seems only when someone says sod it do things move forward, regardless of anyone's level of involvement. I look to the TC as the group that sets the tone, de facto product owners, to paraphrase from OpenStack's native tongue. The more hands-on an individual is with the output, TC or not, a perception arises that a given effort needs only that person's attention, thereby setting a much different narrative than might otherwise be immediately noticed or desired. The place I see for the TC is making sure that there is meaningful progress on agreed-upon efforts, however that needs to exist. Sometimes that might be recruiting, but I don't see browbeating social media as particularly valuable from an individual standpoint.
Sometimes that would be collaborating through code, if it comes down to it. From an overarching perspective, I view hands-on coding by TC to be somewhat of a last-resort effort due to individual commitments. Perceptions surrounding actions, like the oft-used 'stepping up' phrase, create an effect where people do not carve out enough time to effect change, becoming too busy, repeat ad infinitum. Best, Samuel From mtreinish at kortar.org Thu Sep 13 20:09:42 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Thu, 13 Sep 2018 16:09:42 -0400 Subject: [openstack-dev] [doc][i18n][infra][tc] plan for PDF and translation builds for documentation In-Reply-To: <1536844638-sup-1991@lrrr.local> References: <1536789967-sup-82@lrrr.local> <1536844638-sup-1991@lrrr.local> Message-ID: <20180913200942.GA8678@zeong> On Thu, Sep 13, 2018 at 07:23:53AM -0600, Doug Hellmann wrote: > Excerpts from Michel Peterson's message of 2018-09-13 10:04:27 +0300: > > On Thu, Sep 13, 2018 at 1:09 AM, Doug Hellmann > > wrote: > > > > > The longer version is that we want to continue to use the existing > > > tox environment in each project as the basis for the job, since > > > that allows teams to control the version of python used, the > > > dependencies installed, and add custom steps to their build (such > > > as for pre-processing the documentation). So, the new or updated > > > job will start by running "tox -e docs" as it does today. Then it > > > will run Sphinx again with the instructions to build PDF output, > > > and copy the results into the directory that the publish job will > > > use to sync to the web server. And then it will run the scripts to > > > build translated versions of the documentation as HTML, and copy > > > the results into place for publishing. > > > > > > > Just a question out of curiosity. You mention that we still want to use the
However, as I understand, the PDF output will > > happen in a more standardized way and outside of that fine grained control, > > right? That couldn't lead to differences in both documentations? Do we have > > to even worry about that? > > Good question. The idea is to run "tox -e docs" to get the regular > HTML, then something like > > .tox/docs/bin/sphinx-build -b latex doc/build doc/build/latex > cd doc/build/latex > make > cp doc/build/latex/*.pdf doc/build/html To be fair, I've looked at this several times in the past, and sphinx's latex generation is good enough for the simple case, but on more complex documents it doesn't really work too well. For example, on nova I added this a while ago: https://github.com/openstack/nova/blob/master/tools/build_latex_pdf.sh To work around some issues with this workflow. It was enough to get the generated latex to actually compile back then. But, that script has bitrotted and needs to be updated, because the latex from sphinx for nova's docs no longer compiles. (also I submitted a patch to sphinx in the meantime to fix the check mark latex output) I'm afraid that it'll be a constant game of cat and mouse trying to get everything to build. I think that we'll find that on most projects' documentation we will need to massage the latex output from sphinx to build pdfs. -Matt Treinish > > We would run the HTML translation builds in a similar way by invoking > sphinx-build from the virtualenv repeatedly with different locale > settings based on what translations exist. > > In my earlier comment, I was thinking of the case where a team runs > a script to generate rst content files before invoking sphinx to > build the HTML. That script would have been run before the PDF > generation happens, so the content should be the same. That also > applies for anyone using sphinx add-ons, which will be available > to the latex builder because we'll be using the version of sphinx > installed in the virtualenv managed by tox. 
> -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From thierry at openstack.org Thu Sep 13 20:30:12 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 13 Sep 2018 22:30:12 +0200 Subject: [openstack-dev] [release][searchlight] Need rights to create stable branches and tags In-Reply-To: References: Message-ID: Trinh Nguyen wrote: > Dear Release Management team, > > As we're reaching the Stein-1 milestone, I would like to prepare the > branches and tags. According to the documents, it's the job of the > Release Management team but it also says I as the PTL can do it. I > wonder which is the best way because Searchlight has missed several > milestones. > > It would be great if anyone in the Release Management team can give me > some advice. As PTL, you should request tags (releases) by proposing a change to the openstack/releases repository. The process is explained in https://releases.openstack.org/reference/using.html#requesting-a-release and also in: https://docs.openstack.org/project-team-guide/release-management.html#how-to-release No rights are actually needed, we just check that the requester is the PTL or the designated release liaison before approving the request. Let us know if you have other questions ! 
-- Thierry Carrez (ttx) From fungi at yuggoth.org Thu Sep 13 20:44:29 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 13 Sep 2018 20:44:29 +0000 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <9ed16b6f-bc3a-4de3-bbbd-db62ac1ec32d@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> <9ed16b6f-bc3a-4de3-bbbd-db62ac1ec32d@gmail.com> Message-ID: <20180913204428.bydeuacugcydpfxj@yuggoth.org> On 2018-09-12 17:50:30 -0600 (-0600), Matt Riedemann wrote: [...] > Again, I'm not saying TC members should be doing all of the work > themselves. That's not realistic, especially when critical parts > of any major effort are going to involve developers from projects > on which none of the TC members are active contributors (e.g. > nova). I want to see TC members herd cats, for lack of a better > analogy, and help out technically (with code) where possible. I can respect that. I think that OpenStack made a mistake in naming its community management governance body the "technical" committee. I do agree that having TC members engage in activities with tangible outcomes is preferable, and that the needs of the users of its software should weigh heavily in prioritization decisions, but those are not the only problems our community faces nor is it as if there are no other responsibilities associated with being a TC member. > Given the repeated mention of how the "help wanted" list continues > to not draw in contributors, I think the recruiting role of the TC > should take a back seat to actually stepping in and helping work > on those items directly. For example, Sean McGinnis is taking an > active role in the operators guide and other related docs that > continue to be discussed at every face to face event since those > docs were dropped from openstack-manuals (in Pike). 
I completely agree that the help wanted list hasn't worked out well in practice. It was based on requests from the board of directors to provide some means of communicating to their business-focused constituency where resources would be most useful to the project. We've had a subsequent request to reorient it to be more like a set of job descriptions along with clearer business use cases explaining the benefit to them of contributing to these efforts. In my opinion it's very much the responsibility of the TC to find ways to accomplish these sorts of things as well. > I think it's fair to say that the people generally elected to the > TC are those most visible in the community (it's a popularity > contest) and those people are generally the most visible because > they have the luxury of working upstream the majority of their > time. As such, it's their duty to oversee and spend time working > on the hard cross-project technical deliverables that operators > and users are asking for, rather than think of an infinite number > of ways to try and draw *others* to help work on those gaps. But not everyone who is funded for full-time involvement with the community is necessarily "visible" in ways that make them electable. Higher-profile involvement in such activities over time is what gets them the visibility to be more easily elected to governance positions via "popularity contest" mechanics. > As I think it's the role of a PTL within a given project to have a > finger on the pulse of the technical priorities of that project > and manage the developers involved (of which the PTL certainly may > be one), it's the role of the TC to do the same across openstack > as a whole. If a PTL doesn't have the time or willingness to do > that within their project, they shouldn't be the PTL. The same > goes for TC members IMO. 
Completely agree, I think we might just disagree on where to strike the balance of purely technical priorities for the TC (as I personally think the TC is somewhat incorrectly named). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From emilien at redhat.com Thu Sep 13 22:43:11 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 13 Sep 2018 16:43:11 -0600 Subject: [openstack-dev] [TripleO] podman: varlink interface for nice API calls In-Reply-To: <3c8e807f-e523-10b8-b84b-05d672bb0dd0@redhat.com> References: <8e379940-0155-26c4-b377-2bb817184cd7@gmail.com> <363d36a2-c25a-d7ba-3f45-c8b3aa4e6cce@gmail.com> <78bc1c3d-4d97-5a1c-f320-bb08647e8825@gmail.com> <1A3C52DFCD06494D8528644858247BF01C183A00@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C183A4A@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C1847EA@EX10MBOX03.pnnl.gov> <3c8e807f-e523-10b8-b84b-05d672bb0dd0@redhat.com> Message-ID: I suggest that we continue the discussion in this freshly created spec: https://review.openstack.org/602480 http://logs.openstack.org/80/602480/2/check/openstack-tox-docs/9da610c/html/specs/stein/podman.html Any feedback and input is welcome. Thanks, On Thu, Sep 13, 2018 at 3:36 AM Bogdan Dobrelya wrote: > On 8/27/18 6:38 PM, Fox, Kevin M wrote: > > I think in this context, kubelet without all of kubernetes still has the > value that it provides an abstraction layer that podman/paunch is being > suggested to handle. > > > > It does not need the things you mention, network, sidecar, scaleup/down, > etc. You can use as little as you want. > > > > For example, make a pod yaml per container with hostNetwork: true. It > will run just like it was on the host then. You can do just one container. > No sidecars necessary. Without the apiserver, it can't do scaleup/down even > if you wanted to.
> > > > It provides declarative yaml based management of containers, similar to > paunch, so you can skip needing that component. > > That would be a step in the right direction IMO. > > > > > It also already provides crio and docker support via cri. > > > > It does provide a little bit of orchestration, in that you drive things > with declarative yaml. You drop in a yaml file in > /etc/kubernetes/manifests, and it will create the container. You delete it, > it removes the container. If you change it, it will update the container. > And if something goes wrong with the container, it will try and get it back > to the requested state automatically. And, it will recover the containers > on reboot without help. > > > > Thanks, > > Kevin > > > > ________________________________________ > > From: Sergii Golovatiuk [sgolovat at redhat.com] > > Sent: Monday, August 27, 2018 3:46 AM > > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for > nice API calls > > > > Hi, > > > > On Mon, Aug 27, 2018 at 12:16 PM, Rabi Mishra > wrote: > >> On Mon, Aug 27, 2018 at 3:25 PM, Sergii Golovatiuk > > >> wrote: > >>> > >>> Hi, > >>> > >>> On Mon, Aug 27, 2018 at 5:32 AM, Rabi Mishra > wrote: > >>>> On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker > wrote: > >>> Steve mentioned kubectl (kubernetes CLI which communicates with > >> > >> > >> Not sure what he meant. Maybe I'm missing something, but I've not heard of 'kubectl > >> standalone', though he might have meant a standalone k8s cluster on every node > >> as you think. > >> > >>> > >>> kube-api) not kubelet, which is only one component of kubernetes. All > >>> kubernetes components may be compiled as one binary (hyperkube) which > >>> can be used to minimize footprint. Generated ansible for kubelet is > >>> not enough as kubelet doesn't have any orchestration logic. > >> > >> > >> What orchestration logic do we have with TripleO atm? AFAIK we provide > >> roles data for service placement across nodes, right? > >> I see standalone kubelet as a first step for scheduling openstack services > >> within a k8s cluster in the future (maybe). > > It's a half measure. I don't see any advantages of that move. We should > > either adopt kubernetes as a whole or not use its components at all, as > > the maintenance cost will be expensive. Using kubelet requires resolving > > networking communication, scale-up/down, sidecars, or inter-service > > dependencies. > > > >> > >>>>> > >>>>> This was a while ago now so this could be worth revisiting in the > >>>>> future. > >>>>> We'll be making gradual changes, the first of which is using podman to > >>>>> manage single containers. However podman has native support for the pod > >>>>> format, so I'm hoping we can switch to that once this transition is > >>>>> complete. Then evaluating kubectl becomes much easier. > >>>>> > >>>>>> Question. Rather than writing a middle layer to abstract both > >>>>>> container > >>>>>> engines, couldn't you just use CRI? CRI is CRI-O's native language, > >>>>>> and > >>>>>> there is support already for Docker as well. > >>>>> > >>>>> > >>>>> We're not writing a middle layer, we're leveraging one which is already > >>>>> there. > >>>>> > >>>>> CRI-O is a socket interface and podman is a CLI interface that both sit > >>>>> on > >>>>> top of the exact same Go libraries. At this point, switching to podman > >>>>> needs > >>>>> a much lower development effort because we're replacing docker CLI > >>>>> calls. > >>>>> > >>>> I see good value in evaluating kubelet standalone and leveraging its > >>>> inbuilt gRPC interfaces with cri-o (rather than using podman) as a long > >>>> term > >>>> strategy, unless we just want to provide an alternative to the docker > >>>> container > >>>> runtime with cri-o. > >>> > >>> I see no value in using kubelet without kubernetes IMHO.
> >>> > >>> > >>>> > >>>>>> > >>>>>> > >>>>>> Thanks, > >>>>>> Kevin > >>>>>> ________________________________________ > >>>>>> From: Jay Pipes [jaypipes at gmail.com] > >>>>>> Sent: Thursday, August 23, 2018 8:36 AM > >>>>>> To: openstack-dev at lists.openstack.org > >>>>>> Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for > >>>>>> nice > >>>>>> API calls > >>>>>> > >>>>>> Dan, thanks for the details and answers. Appreciated. > >>>>>> > >>>>>> Best, > >>>>>> -jay > >>>>>> > >>>>>> On 08/23/2018 10:50 AM, Dan Prince wrote: > >>>>>>> > >>>>>>> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes > wrote: > >>>>>>>> > >>>>>>>> On 08/15/2018 04:01 PM, Emilien Macchi wrote: > >>>>>>>>> > >>>>>>>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi < > emilien at redhat.com > >>>>>>>>> > wrote: > >>>>>>>>> > >>>>>>>>> More seriously here: there is an ongoing effort to > converge > >>>>>>>>> the > >>>>>>>>> tools around containerization within Red Hat, and we, > TripleO > >>>>>>>>> are > >>>>>>>>> interested to continue the containerization of our > services > >>>>>>>>> (which > >>>>>>>>> was initially done with Docker & Docker-Distribution). > >>>>>>>>> We're looking at how these containers could be managed by > k8s > >>>>>>>>> one > >>>>>>>>> day but way before that we plan to swap out Docker and > join > >>>>>>>>> CRI-O > >>>>>>>>> efforts, which seem to be using Podman + Buildah (among > other > >>>>>>>>> things). > >>>>>>>>> > >>>>>>>>> I guess my wording wasn't the best but Alex explained way better > >>>>>>>>> here: > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52 > >>>>>>>>> > >>>>>>>>> If I may have a chance to rephrase, I guess our current intention > >>>>>>>>> is > >>>>>>>>> to > >>>>>>>>> continue our containerization and investigate how we can improve > >>>>>>>>> our > >>>>>>>>> tooling to better orchestrate the containers. 
> >>>>>>>>> We have a nice interface (openstack/paunch) that allows us to run > >>>>>>>>> multiple container backends, and we're currently looking outside > of > >>>>>>>>> Docker to see how we could solve our current challenges with the > >>>>>>>>> new > >>>>>>>>> tools. > >>>>>>>>> We're looking at CRI-O because it happens to be a project with a > >>>>>>>>> great > >>>>>>>>> community, focusing on some problems that we, TripleO have been > >>>>>>>>> facing > >>>>>>>>> since we containerized our services. > >>>>>>>>> > >>>>>>>>> We're doing all of this in the open, so feel free to ask any > >>>>>>>>> question. > >>>>>>>> > >>>>>>>> I appreciate your response, Emilien, thank you. Alex' responses to > >>>>>>>> Jeremy on the #openstack-tc channel were informative, thank you > >>>>>>>> Alex. > >>>>>>>> > >>>>>>>> For now, it *seems* to me that all of the chosen tooling is very > Red > >>>>>>>> Hat > >>>>>>>> centric. Which makes sense to me, considering Triple-O is a Red > Hat > >>>>>>>> product. > >>>>>>> > >>>>>>> Perhaps a slight clarification here is needed. "Director" is a Red > >>>>>>> Hat > >>>>>>> product. TripleO is an upstream project that is now largely driven > by > >>>>>>> Red Hat and is today marked as single vendor. We welcome others to > >>>>>>> contribute to the project upstream just like anybody else. > >>>>>>> > >>>>>>> And for those who don't know the history the TripleO project was > once > >>>>>>> multi-vendor as well. So a lot of the abstractions we have in place > >>>>>>> could easily be extended to support distro specific implementation > >>>>>>> details. (Kind of what I view podman as in the scope of this > thread). > >>>>>>> > >>>>>>>> I don't know how much of the current reinvention of container > >>>>>>>> runtimes > >>>>>>>> and various tooling around containers is the result of politics. 
I > >>>>>>>> don't > >>>>>>>> know how much is the result of certain companies wanting to "own" > >>>>>>>> the > >>>>>>>> container stack from top to bottom. Or how much is a result of > >>>>>>>> technical > >>>>>>>> disagreements that simply cannot (or will not) be resolved among > >>>>>>>> contributors in the container development ecosystem. > >>>>>>>> > >>>>>>>> Or is it some combination of the above? I don't know. > >>>>>>>> > >>>>>>>> What I *do* know is that the current "NIH du jour" mentality > >>>>>>>> currently > >>>>>>>> playing itself out in the container ecosystem -- reminding me very > >>>>>>>> much > >>>>>>>> of the Javascript ecosystem -- makes it difficult for any > potential > >>>>>>>> *consumers* of container libraries, runtimes or applications to be > >>>>>>>> confident that any choice they make towards one of the other will > be > >>>>>>>> the > >>>>>>>> *right* choice or even a *possible* choice next year -- or next > >>>>>>>> week. > >>>>>>>> Perhaps this is why things like openstack/paunch exist -- to give > >>>>>>>> you > >>>>>>>> options if something doesn't pan out. > >>>>>>> > >>>>>>> This is exactly why paunch exists. > >>>>>>> > >>>>>>> Re, the podman thing I look at it as an implementation detail. The > >>>>>>> good news is that given it is almost a parity replacement for what > we > >>>>>>> already use we'll still contribute to the OpenStack community in > >>>>>>> similar ways. Ultimately whether you run 'docker run' or 'podman > run' > >>>>>>> you end up with the same thing as far as the existing TripleO > >>>>>>> architecture goes. > >>>>>>> > >>>>>>> Dan > >>>>>>> > >>>>>>>> You have a tough job. I wish you all the luck in the world in > making > >>>>>>>> these decisions and hope politics and internal corporate > management > >>>>>>>> decisions play as little a role in them as possible. 
> >>>>>>>> > >>>>>>>> Best, > >>>>>>>> -jay > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > __________________________________________________________________________ > >>>>>>>> OpenStack Development Mailing List (not for usage questions) > >>>>>>>> Unsubscribe: > >>>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > __________________________________________________________________________ > >>>>>>> OpenStack Development Mailing List (not for usage questions) > >>>>>>> Unsubscribe: > >>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>>>>> > >>>>>> > >>>>>> > >>>>>> > __________________________________________________________________________ > >>>>>> OpenStack Development Mailing List (not for usage questions) > >>>>>> Unsubscribe: > >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>>>> > >>>>>> > >>>>>> > >>>>>> > __________________________________________________________________________ > >>>>>> OpenStack Development Mailing List (not for usage questions) > >>>>>> Unsubscribe: > >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>>>> > >>>>>> > >>>>>> > >>>>>> > __________________________________________________________________________ > >>>>>> OpenStack Development Mailing List (not for usage questions) > >>>>>> Unsubscribe: > >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > __________________________________________________________________________ > >>>>> OpenStack Development Mailing List 
(not for usage questions) > >>>>> Unsubscribe: > >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > >>>> > >>>> > >>>> > >>>> -- > >>>> Regards, > >>>> Rabi Mishra > >>>> > >>>> > >>>> > >>>> > __________________________________________________________________________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: > >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > >>> > >>> > >>> > >>> -- > >>> Best Regards, > >>> Sergii Golovatiuk > >>> > >>> > __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> > >> > >> -- > >> Regards, > >> Rabi Mishra > >> > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > > > -- > > Best Regards, > > Sergii Golovatiuk > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Sep 13 23:39:47 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 13 Sep 2018 17:39:47 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <1536783914-sup-2738@lrrr.local> Message-ID: <1536881798-sup-5841@lrrr.local> Excerpts from Jim Rollenhagen's message of 2018-09-13 12:08:08 -0600: > On Wed, Sep 12, 2018 at 2:28 PM, Doug Hellmann > wrote: > > > Excerpts from Doug Hellmann's message of 2018-09-12 12:04:02 -0600: > > > Excerpts from Clark Boylan's message of 2018-09-12 10:44:55 -0700: > > > > On Wed, Sep 12, 2018, at 10:23 AM, Jim Rollenhagen wrote: > > > > > The process of operators upgrading Python versions across their > > fleet came > > > > > up this morning. It's fairly obvious that operators will want to do > > this in > > > > > a rolling fashion. > > > > > > > > > > Has anyone considered doing this in CI? For example, running > > multinode > > > > > grenade with python 2 on one node and python 3 on the other node. > > > > > > > > > > Should we (openstack) test this situation, or even care? > > > > > > > > > > > > > This came up in a Vancouver summit session (the python3 one I think). 
> > General consensus there seemed to be that we should have grenade jobs that > > run python2 on the old side and python3 on the new side and test the update > > from one to another through a release that way. Additionally there was > > thought that the nova partial job (and similar grenade jobs) could hold the > > non upgraded node on python2 and that would talk to a python3 control plane. > > > > > > > > I haven't seen or heard of anyone working on this yet though. > > > > > > > > Clark > > > > > > > > > > IIRC, we also talked about not supporting multiple versions of > > > python on a given node, so all of the services on a node would need > > > to be upgraded together. > > > > > > Doug > > > > I spent a little time talking with the QA team about setting up > > this job, and Attila pointed out that we should think about what > > exactly we think would break during a 2-to-3 in-place upgrade like > > this. > > > > Keeping in mind that we are still testing initial installation under > > both versions and upgrades under python 2, do we have any specific > > concerns about the python *version* causing upgrade issues? > > > > A specific example brought up in the ironic room was the way we encode > exceptions in oslo.messaging for transmitting over RPC. I know that we've > found encoding bugs in that in the past, and one can imagine that RPC > between a service running on py2 and a service running on py3 could have > similar issues. Mixing python 2 and 3 components of the same service across nodes does seem like an interesting case. I wonder if it's something we could build a functional test job in oslo.messaging for, though, without having to test every service separately. I'd be happy if someone did that. > It's definitely edge cases that we'd be catching here (if any), so I'm > personally fine with assuming it will just work. But I wanted to pose the > question to the list, as we agreed this isn't only an ironic problem. Yes, definitely. 
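As a concrete illustration of the encoding concern, here is a minimal sketch of the kind of normalization an RPC fault serializer has to do; the helper below is hypothetical and is not the actual oslo.messaging code:

```python
# Hypothetical sketch of normalizing an exception payload for RPC.
# A python 2 service may raise exceptions whose message is a utf-8
# byte string, while a python 3 consumer expects text, so the payload
# has to be normalized before it crosses the wire.
import json

def serialize_fault(exc):
    # Not the real oslo.messaging code, just the shape of the problem.
    msg = exc.args[0] if exc.args else ''
    if isinstance(msg, bytes):  # py2-style byte-string message
        msg = msg.decode('utf-8', 'replace')
    return json.dumps({'class': exc.__class__.__name__, 'message': msg})

payload = serialize_fault(ValueError(b'caf\xc3\xa9 unavailable'))
print(json.loads(payload)['message'])  # -> café unavailable
```

A functional test along these lines could exercise the round trip between interpreters without standing up every service.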
I think it's likely to be a bit of work to set up the jobs and run them for all services, which is why I'm trying to understand if it's really needed. Thinking through the cases on the list is a good way to get folks to poke holes in any assertions, so I appreciate that you started the thread and that everyone is participating. Doug From doug at doughellmann.com Thu Sep 13 23:41:58 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 13 Sep 2018 17:41:58 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> Message-ID: <1536882001-sup-6554@lrrr.local> Excerpts from Mathieu Gagné's message of 2018-09-13 14:12:56 -0400: > On Wed, Sep 12, 2018 at 2:04 PM, Doug Hellmann wrote: > > > > IIRC, we also talked about not supporting multiple versions of > > python on a given node, so all of the services on a node would need > > to be upgraded together. > > > > Will services support both versions at some point for the same > OpenStack release? Or is it already the case? > > I would like to avoid having to upgrade Nova, Neutron and Ceilometer > at the same time since all end up running on a compute node and > sharing the same python version. We need to differentiate between what the upstream community supports and what distros support. In the meeting in Vancouver, we said that the community would support upgrading all of the services on a single node together. Distros may choose to support more complex configurations if they choose, and I'm sure patches related to any bugs would be welcome. But I don't think we can ask the community to support the infinite number of variations that would occur if we said we would test upgrading some services independently of others (unless I'm mistaken, we don't even do that for services all using the same version of python 2, today). 
Doug From johnsomor at gmail.com Thu Sep 13 23:45:36 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 13 Sep 2018 17:45:36 -0600 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: Message-ID: In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" which maps to the "os_<service-type>_api:<resource>:<method>" format. I selected it as it uses the service-type[1], references the API resource, and then the method. So it maps well to the API reference[2] for the service. [0] https://docs.openstack.org/octavia/latest/configuration/policy.html [1] https://service-types.openstack.org/ [2] https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer Michael On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: > > So +1 > > > > Tim > > > > From: Lance Bragstad > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 12 September 2018 at 20:43 > To: "OpenStack Development Mailing List (not for usage questions)" , OpenStack Operators > Subject: [openstack-dev] [all] Consistent policy names > > > > The topic of having consistent policy names has popped up a few times this week. Ultimately, if we are to move forward with this, we'll need a convention. To help with that a little bit I started an etherpad [0] that includes links to policy references, basic conventions *within* that service, and some examples of each. I got through quite a few projects this morning, but there are still a couple left. > > > > The idea is to look at what we do today and see what conventions we can come up with to move towards, which should also help us determine how much each convention is going to impact services (e.g. picking a convention that will cause 70% of services to rename policies). > > > > Please have a look and we can discuss conventions in this thread.
If we come to agreement, I'll start working on some documentation in oslo.policy so that it's somewhat official before starting to rename policies. > > > > [0] https://etherpad.openstack.org/p/consistent-policy-names > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From johnsomor at gmail.com Thu Sep 13 23:56:28 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 13 Sep 2018 17:56:28 -0600 Subject: [openstack-dev] [octavia] Optimize the query of the octavia database In-Reply-To: <423483AB-0159-4C01-9CC5-A61AB24A4341@blizzard.com> References: <423483AB-0159-4C01-9CC5-A61AB24A4341@blizzard.com> Message-ID: This is a known regression in the Octavia API performance. It has an existing story[0] that is under development. You are correct, that star join is the root of the problem. Look for a patch soon. [0] https://storyboard.openstack.org/#!/story/2002933 Michael On Thu, Sep 13, 2018 at 10:32 AM Erik Olof Gunnar Andersson wrote: > > This was solved in neutron-lbaas recently, maybe we could adopt the same method for Octavia? > > Sent from my iPhone > > On Sep 13, 2018, at 4:54 AM, Jeff Yang wrote: > > Hi, All > > As octavia resources increase, I found that running the "openstack loadbalancer list" command takes longer and longer. Sometimes a 504 error is reported. > > By reading the code, I found that octavia performs complex left outer join queries when acquiring resources such as loadbalancer, listener, pool, etc. in order to only make one trip to the database. > Reference code: http://paste.openstack.org/show/730022 Line 133 > Generated SQL statements: http://paste.openstack.org/show/730021 > > So, I suggest adjusting the query strategy to provide different join queries for different resources.
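The per-resource strategy suggested above can be sketched as follows; the schema and helper are invented for illustration and are not the real Octavia models:

```python
# Sketch of splitting one wide star join into one query per related
# table. The schema is invented for the example and is not the real
# Octavia model; the point is the query pattern, not the columns.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE load_balancer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE listener (id INTEGER PRIMARY KEY, lb_id INTEGER);
    CREATE TABLE pool (id INTEGER PRIMARY KEY, lb_id INTEGER);
    INSERT INTO load_balancer VALUES (1, 'lb1');
    INSERT INTO listener VALUES (1, 1), (2, 1);
    INSERT INTO pool VALUES (1, 1);
""")

def list_load_balancers(conn):
    # A single LEFT OUTER JOIN across every related table returns
    # len(listeners) * len(pools) rows per load balancer; separate
    # small SELECTs return len(listeners) + len(pools) rows instead.
    lbs = {row[0]: {'name': row[1], 'listeners': [], 'pools': []}
           for row in conn.execute('SELECT id, name FROM load_balancer')}
    for lid, lb_id in conn.execute('SELECT id, lb_id FROM listener'):
        lbs[lb_id]['listeners'].append(lid)
    for pid, lb_id in conn.execute('SELECT id, lb_id FROM pool'):
        lbs[lb_id]['pools'].append(pid)
    return lbs

print(list_load_balancers(conn)[1])
# -> {'name': 'lb1', 'listeners': [1, 2], 'pools': [1]}
```

With many relationships, the row explosion of the joined form grows multiplicatively, which is why the per-relationship form scales better for list calls.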
> > https://storyboard.openstack.org/#!/story/2003751 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mgagne at calavera.ca Fri Sep 14 00:09:12 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Thu, 13 Sep 2018 20:09:12 -0400 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: <1536882001-sup-6554@lrrr.local> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <1536882001-sup-6554@lrrr.local> Message-ID: On Thu, Sep 13, 2018 at 7:41 PM, Doug Hellmann wrote: > Excerpts from Mathieu Gagné's message of 2018-09-13 14:12:56 -0400: >> On Wed, Sep 12, 2018 at 2:04 PM, Doug Hellmann wrote: >> > >> > IIRC, we also talked about not supporting multiple versions of >> > python on a given node, so all of the services on a node would need >> > to be upgraded together. >> > >> >> Will services support both versions at some point for the same >> OpenStack release? Or is it already the case? >> >> I would like to avoid having to upgrade Nova, Neutron and Ceilometer >> at the same time since all end up running on a compute node and >> sharing the same python version. > > We need to differentiate between what the upstream community supports > and what distros support. In the meeting in Vancouver, we said that > the community would support upgrading all of the services on a > single node together. 
Distros may choose to support more complex > configurations if they choose, and I'm sure patches related to any > bugs would be welcome. We maintain and build our own packages with virtualenv. We aren't bound to distribution packages. > But I don't think we can ask the community > to support the infinite number of variations that would occur if > we said we would test upgrading some services independently of > others (unless I'm mistaken, we don't even do that for services > all using the same version of python 2, today). This contradicts what I heard in fishbowl sessions from core reviewers and read on IRC. People were under the false impression that you need to upgrade OpenStack in lock steps when in fact, it has never been the case. You should be able to upgrade services individually. Has it changed since? -- Mathieu From dangtrinhnt at gmail.com Fri Sep 14 00:23:35 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 14 Sep 2018 09:23:35 +0900 Subject: [openstack-dev] [release][searchlight] Need rights to create stable branches and tags In-Reply-To: References: Message-ID: Hi Thierry, Thanks for the information. I will look into those. Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Fri, Sep 14, 2018 at 5:30 AM Thierry Carrez wrote: > Trinh Nguyen wrote: > > Dear Release Management team, > > > > As we're reaching the Stein-1 milestone, I would like to prepare the > > branches and tags. According to the documents, it's the job of the > > Release Management team but it also says I as the PTL can do it. I > > wonder which is the best way because Searchlight has missed several > > milestones. > > > > It would be great if anyone in the Release Management team can give me > > some advice. > > As PTL, you should request tags (releases) by proposing a change to the > openstack/releases repository. 
The process is explained in > > https://releases.openstack.org/reference/using.html#requesting-a-release > > and also in: > > > https://docs.openstack.org/project-team-guide/release-management.html#how-to-release > > No rights are actually needed, we just check that the requester is the > PTL or the designated release liaison before approving the request. > > Let us know if you have other questions ! > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianyrchoi at gmail.com Fri Sep 14 01:09:26 2018 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Fri, 14 Sep 2018 10:09:26 +0900 Subject: [openstack-dev] [doc][i18n][infra][tc] plan for PDF and translation builds for documentation In-Reply-To: <20180913200942.GA8678@zeong> References: <1536789967-sup-82@lrrr.local> <1536844638-sup-1991@lrrr.local> <20180913200942.GA8678@zeong> Message-ID: <48506b93-b9a7-6343-0463-d5f0fa5f9531@gmail.com> First of all, thanks a lot for nice summary - I would like to deeply read and put comments later. And @mtreinish, please see my reply inline: Matthew Treinish wrote on 9/14/2018 5:09 AM: > On Thu, Sep 13, 2018 at 07:23:53AM -0600, Doug Hellmann wrote: >> Excerpts from Michel Peterson's message of 2018-09-13 10:04:27 +0300: >>> On Thu, Sep 13, 2018 at 1:09 AM, Doug Hellmann >>> wrote: >>> >>>> The longer version is that we want to continue to use the existing >>>> tox environment in each project as the basis for the job, since >>>> that allows teams to control the version of python used, the >>>> dependencies installed, and add custom steps to their build (such >>>> as for pre-processing the documentation). 
So, the new or updated >>>> job will start by running "tox -e docs" as it does today. Then it >>>> will run Sphinx again with the instructions to build PDF output, >>>> and copy the results into the directory that the publish job will >>>> use to sync to the web server. And then it will run the scripts to >>>> build translated versions of the documentation as HTML, and copy >>>> the results into place for publishing. >>>> >>> Just a question out of curiosity. You mention that we still want to use the >>> docs environment because it allows fine grained control over how the >>> documentation is created. However, as I understand, the PDF output will >>> happen in a more standardized way and outside of that fine grained control, >>> right? Couldn't that lead to differences between the two sets of documentation? Do we have >>> to even worry about that? >> Good question. The idea is to run "tox -e docs" to get the regular >> HTML, then something like >> >> .tox/docs/bin/sphinx-build -b latex doc/build doc/build/latex >> cd doc/build/latex >> make >> cp doc/build/latex/*.pdf doc/build/html > To be fair, I've looked at this several times in the past, and sphinx's latex > generation is good enough for the simple case, but on more complex documents > it doesn't really work too well. For example, on nova I added this a while ago: > > https://github.com/openstack/nova/blob/master/tools/build_latex_pdf.sh After seeing what the script is doing, I want to divide this into several parts and discuss each with a generic approach: - svg -> png  : PDF builds ideally convert all svg files into PDF with no problems, but there are some realistic problems    such as problems determining the bounding box size of vector svg files, and big memory problems with svg files that contain many tags.  : Maybe it would be solved if we checked that all svg files are correctly formatted,    or if all svg files were converted to png files with temporary changes to the rst files (.svg -> .png), wouldn't it?
- non-latin character problems:  : By default, Sphinx uses the latex builder, which doesn't support non-latin characters and customized fonts [1].    The Documentation team tried to make use of xelatex instead of latex in the Sphinx configuration, and this is now overridden    in openstackdocstheme >=1.20. So non-latin characters would not generate problems if you use openstackdocstheme >=1.20. - other things  : I could not capture the background on other changes such as additional packages.    If you provide more background on these, I would like to investigate how to approach them, either by changing the rst files    to make them compatible with pdf builds or by supporting pdf builds on as many project repos as possible. When I tested PDF builds on the current nova repo with the master branch, the rst document was too big (876 pages, with errors) and more work was needed to overcome the memory problems. I would like to think about how to overcome this, but it would also be nice if someone shared advice or comments on this.
>> >> In my earlier comment, I was thinking of the case where a team runs >> a script to generate rst content files before invoking sphinx to >> build the HTML. That script would have been run before the PDF >> generation happens, so the content should be the same. That also >> applies for anyone using sphinx add-ons, which will be available >> to the latex builder because we'll be using the version of sphinx >> installed in the virtualenv managed by tox. >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From naichuan.sun at citrix.com Fri Sep 14 01:28:12 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Fri, 14 Sep 2018 01:28:12 +0000 Subject: [openstack-dev] About microversion setting to enable nested resource provider In-Reply-To: References: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> <90a534cec8ff4957a141af2ed1686934@SINPEX02CL01.citrite.net> <0acdc7e5-432f-fc99-4ce2-c9df53af1a3b@fried.cc> Message-ID: <9eba70d4c66a435792fe2c9c3ba596d4@SINPEX02CL01.citrite.net> Hi, Sylvain, Thank you very much for the information. It is pity that I can’t attend the meeting. I have a concern about reshaper in multi-type vgpu support. In the old vgpu support, we only have one vgpu inventory in root resource provider, which means we only support one vgpu type. When do reshape, placement will send allocations(which include just one vgpu resource allocation information) to the driver, if the host have more than one pgpu/pgpug(which support different vgpu type), how do we know which pgpu/pgpug own the allocation information? Do we need to communicate with hypervisor the confirm that? Thank you very much. BR. 
Naichuan Sun From: Sylvain Bauza [mailto:sbauza at redhat.com] Sent: Thursday, September 13, 2018 11:47 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] About microversion setting to enable nested resource provider Hey Naichuan, FWIW, we discussed on the missing pieces for nested resource providers. See the (currently-in-use) etherpad https://etherpad.openstack.org/p/nova-ptg-stein and lookup for "closing the gap on nested resource providers" (L144 while I speak) The fact that we are not able to schedule yet is a critical piece that we said we're going to work on it as soon as we can. -Sylvain On Thu, Sep 13, 2018 at 9:14 AM, Eric Fried > wrote: There's a patch series in progress for this: https://review.openstack.org/#/q/topic:use-nested-allocation-candidates It needs some TLC. I'm sure gibi and tetsuro would welcome some help... efried On 09/13/2018 08:31 AM, Naichuan Sun wrote: > Thank you very much, Jay. > Is there somewhere I could set microversion(some configure file?), Or just modify the source code to set it? > > BR. > Naichuan Sun > > -----Original Message----- > From: Jay Pipes [mailto:jaypipes at gmail.com] > Sent: Thursday, September 13, 2018 9:19 PM > To: Naichuan Sun >; OpenStack Development Mailing List (not for usage questions) > > Cc: melanie witt >; efried at us.ibm.com; Sylvain Bauza > > Subject: Re: About microversion setting to enable nested resource provider > > On 09/13/2018 06:39 AM, Naichuan Sun wrote: >> Hi, guys, >> >> Looks n-rp is disabled by default because microversion matches 1.29 : >> https://github.com/openstack/nova/blob/master/nova/api/openstack/place >> ment/handlers/allocation_candidate.py#L252 >> >> Anyone know how to set the microversion to enable n-rp in placement? 
> > It is the client which must send the 1.29+ placement API microversion header to indicate to the placement API server that the client wants to receive nested provider information in the allocation candidates response. > > Currently, nova-scheduler calls the scheduler reportclient's > get_allocation_candidates() method: > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/manager.py#L138 > > The scheduler reportclient's get_allocation_candidates() method currently passes the 1.25 placement API microversion header: > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L353 > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L53 > > In order to get the nested information returned in the allocation candidates response, that would need to be upped to 1.29. > > Best, > -jay > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From naichuan.sun at citrix.com Fri Sep 14 01:29:30 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Fri, 14 Sep 2018 01:29:30 +0000 Subject: [openstack-dev] About microversion setting to enable nested resource provider In-Reply-To: <0acdc7e5-432f-fc99-4ce2-c9df53af1a3b@fried.cc> References: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> <90a534cec8ff4957a141af2ed1686934@SINPEX02CL01.citrite.net> <0acdc7e5-432f-fc99-4ce2-c9df53af1a3b@fried.cc> Message-ID: <9849255d51714ea08100498a2dc2dbd3@SINPEX02CL01.citrite.net> Thank you very much, Eric. Will check the patches. BR. Naichuan Sun -----Original Message----- From: Eric Fried [mailto:openstack at fried.cc] Sent: Thursday, September 13, 2018 11:14 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] About microversion setting to enable nested resource provider There's a patch series in progress for this: https://review.openstack.org/#/q/topic:use-nested-allocation-candidates It needs some TLC. I'm sure gibi and tetsuro would welcome some help... efried On 09/13/2018 08:31 AM, Naichuan Sun wrote: > Thank you very much, Jay. > Is there somewhere I could set microversion(some configure file?), Or just modify the source code to set it? > > BR. 
> Naichuan Sun > > -----Original Message----- > From: Jay Pipes [mailto:jaypipes at gmail.com] > Sent: Thursday, September 13, 2018 9:19 PM > To: Naichuan Sun ; OpenStack Development > Mailing List (not for usage questions) > > Cc: melanie witt ; efried at us.ibm.com; Sylvain > Bauza > Subject: Re: About microversion setting to enable nested resource > provider > > On 09/13/2018 06:39 AM, Naichuan Sun wrote: >> Hi, guys, >> >> Looks n-rp is disabled by default because microversion matches 1.29 : >> https://github.com/openstack/nova/blob/master/nova/api/openstack/plac >> e >> ment/handlers/allocation_candidate.py#L252 >> >> Anyone know how to set the microversion to enable n-rp in placement? > > It is the client which must send the 1.29+ placement API microversion header to indicate to the placement API server that the client wants to receive nested provider information in the allocation candidates response. > > Currently, nova-scheduler calls the scheduler reportclient's > get_allocation_candidates() method: > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a5 > 34410b5df/nova/scheduler/manager.py#L138 > > The scheduler reportclient's get_allocation_candidates() method currently passes the 1.25 placement API microversion header: > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a5 > 34410b5df/nova/scheduler/client/report.py#L353 > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a5 > 34410b5df/nova/scheduler/client/report.py#L53 > > In order to get the nested information returned in the allocation candidates response, that would need to be upped to 1.29. 
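Concretely, "upping" the version means the client sends a larger placement microversion in the standard OpenStack-API-Version header. A minimal sketch (the helper is invented for illustration; nova's report client actually sends this header through its keystoneauth session):

```python
# Sketch of what "upping the microversion" means on the client side:
# asking placement for at least 1.29 via the standard microversion
# header. The helper is invented for illustration; nova's report
# client sends this header through its keystoneauth session.
NESTED_RP_MIN_VERSION = '1.29'

def placement_headers(microversion=NESTED_RP_MIN_VERSION):
    # Placement only includes nested-provider information in an
    # allocation candidates response when the client asks for >= 1.29.
    return {'OpenStack-API-Version': 'placement %s' % microversion}

print(placement_headers())
# -> {'OpenStack-API-Version': 'placement 1.29'}
```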
> > Best, > -jay > ______________________________________________________________________ > ____ OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Fri Sep 14 02:14:49 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 13 Sep 2018 20:14:49 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <1536882001-sup-6554@lrrr.local> Message-ID: <1536890416-sup-8249@lrrr.local> Excerpts from Mathieu Gagné's message of 2018-09-13 20:09:12 -0400: > On Thu, Sep 13, 2018 at 7:41 PM, Doug Hellmann wrote: > > Excerpts from Mathieu Gagné's message of 2018-09-13 14:12:56 -0400: > >> On Wed, Sep 12, 2018 at 2:04 PM, Doug Hellmann wrote: > >> > > >> > IIRC, we also talked about not supporting multiple versions of > >> > python on a given node, so all of the services on a node would need > >> > to be upgraded together. > >> > > >> > >> Will services support both versions at some point for the same > >> OpenStack release? Or is it already the case? > >> > >> I would like to avoid having to upgrade Nova, Neutron and Ceilometer > >> at the same time since all end up running on a compute node and > >> sharing the same python version. > > > > We need to differentiate between what the upstream community supports > > and what distros support. In the meeting in Vancouver, we said that > > the community would support upgrading all of the services on a > > single node together. 
Distros may choose to support more complex > > configurations if they choose, and I'm sure patches related to any > > bugs would be welcome. > > We maintain and build our own packages with virtualenv. We aren't > bound to distribution packages. OK, I should rephrase then. I'm talking about the limits on the tests that I think are useful and reasonable to run upstream and for the community to support. > > But I don't think we can ask the community > > to support the infinite number of variations that would occur if > > we said we would test upgrading some services independently of > > others (unless I'm mistaken, we don't even do that for services > > all using the same version of python 2, today). > > This contradicts what I heard in fishbowl sessions from core reviewers > and read on IRC. > People were under the false impression that you need to upgrade > OpenStack in lock steps when in fact, it has never been the case. > You should be able to upgrade services individually. > > Has it changed since? I know that some deployments do upgrade components separately, and it works in some configurations. All we talked about in Vancouver was how we would test upgrading python 2 to python 3, and given that the community has never, as far as I know, run upgrade tests in CI that staggered the upgrades of components on a given node, there seemed no reason to add those tests just for the python 2 to 3 case. Perhaps someone on the QA team can correct me if I'm wrong about the history there. Doug From yjf1970231893 at gmail.com Fri Sep 14 03:17:38 2018 From: yjf1970231893 at gmail.com (Jeff Yang) Date: Fri, 14 Sep 2018 11:17:38 +0800 Subject: [openstack-dev] [octavia] Optimize the query of the octavia database In-Reply-To: References: <423483AB-0159-4C01-9CC5-A61AB24A4341@blizzard.com> Message-ID: Thanks. I found the corresponding patch in neutron-lbaas: https://review.openstack.org/#/c/568361/ The bug was marked as high priority by our QA team. I need to fix it as soon as possible.
Does Michael Johnson have any good suggestions? I am willing to complete the repair work for this bug if your patch still takes a while to prepare. On Fri, Sep 14, 2018 at 7:56 AM, Michael Johnson wrote: > This is a known regression in the Octavia API performance. It has an > existing story[0] that is under development. You are correct, that > star join is the root of the problem. > Look for a patch soon. > > [0] https://storyboard.openstack.org/#!/story/2002933 > > Michael > On Thu, Sep 13, 2018 at 10:32 AM Erik Olof Gunnar Andersson > wrote: > > > > This was solved in neutron-lbaas recently, maybe we could adopt the same > method for Octavia? > > > > Sent from my iPhone > > > > On Sep 13, 2018, at 4:54 AM, Jeff Yang wrote: > > > > Hi, All > > > > As octavia resources increase, I found that running the "openstack > loadbalancer list" command takes longer and longer. Sometimes a 504 error > is reported. > > > > By reading the code, I found that octavia performs complex left > outer join queries when acquiring resources such as loadbalancer, listener, > pool, etc. in order to only make one trip to the database. > > Reference code: http://paste.openstack.org/show/730022 Line 133 > > Generated SQL statements: http://paste.openstack.org/show/730021 > > > > So, I suggest adjusting the query strategy to provide different join > queries for different resources.
> > > > https://storyboard.openstack.org/#!/story/2003751 > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtreinish at kortar.org Fri Sep 14 03:21:43 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Thu, 13 Sep 2018 23:21:43 -0400 Subject: [openstack-dev] [doc][i18n][infra][tc] plan for PDF and translation builds for documentation In-Reply-To: <48506b93-b9a7-6343-0463-d5f0fa5f9531@gmail.com> References: <1536789967-sup-82@lrrr.local> <1536844638-sup-1991@lrrr.local> <20180913200942.GA8678@zeong> <48506b93-b9a7-6343-0463-d5f0fa5f9531@gmail.com> Message-ID: <20180914032143.GA31662@zeong> On Fri, Sep 14, 2018 at 10:09:26AM +0900, Ian Y. Choi wrote: > First of all, thanks a lot for nice summary - I would like to deeply > read and put comments later. 
> > And @mtreinish, please see my reply inline: > > Matthew Treinish wrote on 9/14/2018 5:09 AM: > > On Thu, Sep 13, 2018 at 07:23:53AM -0600, Doug Hellmann wrote: > >> Excerpts from Michel Peterson's message of 2018-09-13 10:04:27 +0300: > >>> On Thu, Sep 13, 2018 at 1:09 AM, Doug Hellmann > >>> wrote: > >>> > >>>> The longer version is that we want to continue to use the existing > >>>> tox environment in each project as the basis for the job, since > >>>> that allows teams to control the version of python used, the > >>>> dependencies installed, and add custom steps to their build (such > >>>> as for pre-processing the documentation). So, the new or updated > >>>> job will start by running "tox -e docs" as it does today. Then it > >>>> will run Sphinx again with the instructions to build PDF output, > >>>> and copy the results into the directory that the publish job will > >>>> use to sync to the web server. And then it will run the scripts to > >>>> build translated versions of the documentation as HTML, and copy > >>>> the results into place for publishing. > >>>> > >>> Just a question out of curiosity. You mention that we still want to use the > >>> docs environment because it allows fine grained control over how the > >>> documentation is created. However, as I understand, the PDF output will > >>> happen in a more standardized way and outside of that fine grained control, > >>> right? That couldn't lead to differences in both documentations? Do we have > >>> to even worry about that? > >> Good question. The idea is to run "tox -e docs" to get the regular > >> HTML, then something like > >> > >> .tox/docs/bin/sphinx-build -b latex doc/build doc/build/latex > >> cd doc/build/latex > >> make > >> cp doc/build/latex/*.pdf doc/build/html > > To be fair, I've looked at this several times in the past, and sphinx's latex > > generation is good enough for the simple case, but on more complex documents > > it doesn't really work too well. 
For example, on nova I added this a while ago: > > > > https://github.com/openstack/nova/blob/master/tools/build_latex_pdf.sh > > After seeing what the script is doing, I want to divide it into several parts > and discuss each with a generic approach: > > - svg -> png >  : PDF builds ideally convert all svg files into PDF with no problems, > but there are some realistic problems >    such as problems determining bounding box size on vector svg > files, and big memory problems with lots of tags in svg files. >  : Maybe it would be solved if we check all svg files for correct > formatting, >    or if all svg files are converted to png files with temporary changes > to the rst file (.svg -> .png), wouldn't it? Yeah we will have to do either. In my experience just converting to png images is normally easier. > > - non-latin code problems: >  : By default, Sphinx uses the latex builder, which doesn't support > non-latin codes and customized fonts [1]. >    The documentation team tried to make use of xelatex instead of latex in > the Sphinx configuration and now it is overridden >    on openstackdocstheme >=1.20. So non-latin code would not generate > problems if you use openstackdocstheme >=1.20. Ok sure, using XeTeX will solve this problem. I typically still just use pdflatex so back when I pushed that script (which was over 3 years ago) I was trying to fix it by converting the non-latin characters by using latex symbol equivalents for those characters. (which is a feature built-in to sphinx, but it just misses a lot of symbols) > > - other things >  : I could not capture the background on other changes such as > additional packages. >    If you provide more background on other things, I would like to > investigate how to approach it by changing a rst file >    to make it compatible with pdf builds or how to support all pdf builds > on many project repos as much as possible. The extra packages were part of the attempt to fix the non-latin characters using latex symbols.
Those packages are just added there so you can call \checkmark and \ding{54} instead of ✔ and ✖. > > When I test PDF builds on current nova repo with master branch, it seems > that the rst document is too big > (876 pages with error) and more dealing with overcoming memory problems > was needed. > I would like to think how to overcome this, but it would be also nice if > someone shares advices or comments on this. Hmm, I wasn't able to even get that far. When I tried a vanilla pdf build from nova master it only compiled 540 pages before it errored out on capacity exceeded. I know that the limit is adjustable in a config file, but I'm not sure if there is a more dynamic method for adjusting it. -Matt Treinish > > > [1] https://tug.org/pipermail/xetex/2011-September/021324.html > [2] https://review.openstack.org/#/c/552070/5/openstackdocstheme/ext.py at 227 > > > To work around some issues with this workflow. It was enough to get the > > generated latex to actually compile back then. But, that script has bitrotted > > and needs to be updated, because the latex from sphinx for nova's docs no > > longer compiles. (also I submitted a patch to sphinx in the meantime to > > fix the check mark latex output) I'm afraid that it'll be a constant game > > of cat and mouse trying to get everything to build. > > > > I think that we'll find that on most projects' documentation we will need > > to massage the latex output from sphinx to build pdfs. > > > > -Matt Treinish > > > >> We would run the HTML translation builds in a similar way by invoking > >> sphinx-build from the virtualenv repeatedly with different locale > >> settings based on what translations exist. > >> > >> In my earlier comment, I was thinking of the case where a team runs > >> a script to generate rst content files before invoking sphinx to > >> build the HTML. That script would have been run before the PDF > >> generation happens, so the content should be the same. 
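For reference, the configuration knobs discussed above (xelatex engine, extra preamble packages for symbols) could look roughly like this in a hypothetical doc/source/conf.py fragment; all values are illustrative, and openstackdocstheme >= 1.20 already applies its own xelatex override, so projects using it would not set these by hand:

```python
# Hypothetical Sphinx conf.py fragment -- all values illustrative.
latex_engine = 'xelatex'  # handles non-latin text and custom fonts natively

latex_elements = {
    # Under pdflatex, extra packages like these are what the old nova
    # script pulled in so \checkmark (amssymb) and \ding{54} (pifont)
    # could stand in for symbol characters the default setup misses.
    'preamble': '\\usepackage{amssymb}\n\\usepackage{pifont}',
}

# The latex builder cannot embed SVG directly, so image sources either
# need pre-converted PNGs or an extension that converts at build time.
```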
That also > >> applies for anyone using sphinx add-ons, which will be available > >> to the latex builder because we'll be using the version of sphinx > >> installed in the virtualenv managed by tox. > >> > >> > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Fri Sep 14 03:43:20 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 13 Sep 2018 21:43:20 -0600 Subject: [openstack-dev] [doc][i18n][infra][tc] plan for PDF and translation builds for documentation In-Reply-To: <20180914032143.GA31662@zeong> References: <1536789967-sup-82@lrrr.local> <1536844638-sup-1991@lrrr.local> <20180913200942.GA8678@zeong> <48506b93-b9a7-6343-0463-d5f0fa5f9531@gmail.com> <20180914032143.GA31662@zeong> Message-ID: <1536896515-sup-8367@lrrr.local> Excerpts from Matthew Treinish's message of 2018-09-13 23:21:43 -0400: > On Fri, Sep 14, 2018 at 10:09:26AM +0900, Ian Y. Choi wrote: > > When I test PDF builds on current nova repo with master branch, it seems > > that the rst document is too big > > (876 pages with error) and more dealing with overcoming memory problems > > was needed. > > I would like to think how to overcome this, but it would be also nice if > > someone shares advices or comments on this. > > Hmm, I wasn't able to even get that far. When I tried a vanilla pdf build > from nova master it only compiled 540 pages before it errored out on capacity > exceeded. I know that the limit is adjustable in a config file, but I'm not > sure if there is a more dynamic method for adjusting it. 
The content is organized into sections based on audience/purpose now (install, user, api, etc.). The latex builder supports extracting different sections of the content to create separate output files by starting at a different root file. I wonder if that's another reasonable approach for us to take here? Doug From mgagne at calavera.ca Fri Sep 14 03:46:29 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Thu, 13 Sep 2018 23:46:29 -0400 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: <1536890416-sup-8249@lrrr.local> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <1536882001-sup-6554@lrrr.local> <1536890416-sup-8249@lrrr.local> Message-ID: On Thu, Sep 13, 2018 at 10:14 PM, Doug Hellmann wrote: > Excerpts from Mathieu Gagné's message of 2018-09-13 20:09:12 -0400: >> On Thu, Sep 13, 2018 at 7:41 PM, Doug Hellmann wrote: >> > Excerpts from Mathieu Gagné's message of 2018-09-13 14:12:56 -0400: >> >> On Wed, Sep 12, 2018 at 2:04 PM, Doug Hellmann wrote: >> >> > >> >> > IIRC, we also talked about not supporting multiple versions of >> >> > python on a given node, so all of the services on a node would need >> >> > to be upgraded together. >> >> > >> >> >> >> Will services support both versions at some point for the same >> >> OpenStack release? Or is it already the case? >> >> >> >> I would like to avoid having to upgrade Nova, Neutron and Ceilometer >> >> at the same time since all end up running on a compute node and >> >> sharing the same python version. >> > >> > We need to differentiate between what the upstream community supports >> > and what distros support. In the meeting in Vancouver, we said that >> > the community would support upgrading all of the services on a >> > single node together. Distros may choose to support more complex >> > configurations if they choose, and I'm sure patches related to any >> > bugs would be welcome. 
>> We maintain and build our own packages with virtualenv. We aren't >> bound to distribution packages. > > OK, I should rephrase then. I'm talking about the limits on the > tests that I think are useful and reasonable to run upstream and > for the community to support. > >> > But I don't think we can ask the community >> > to support the infinite number of variations that would occur if >> > we said we would test upgrading some services independently of >> > others (unless I'm mistaken, we don't even do that for services >> > all using the same version of python 2, today). >> >> This contradicts what I heard in fishbowl sessions from core reviewers >> and read on IRC. >> People were under the false impression that you need to upgrade >> OpenStack in lock steps when in fact, it has never been the case. >> You should be able to upgrade services individually. >> >> Has it changed since? > > I know that some deployments do upgrade components separately, and > it works in some configurations. All we talked about in Vancouver > was how we would test upgrading python 2 to python 3, and given > that the community has never, as far as I know, run upgrade tests > in CI that staggered the upgrades of components on a given node, > there seemed no reason to add those tests just for the python 2 to > 3 case. > > Perhaps someone on the QA team can correct me if I'm wrong about the > history there. > Or maybe it's me that misinterpreted the actual impact of not supporting 2 versions of Python at the same time. Let's walk through an actual upgrade scenario. I suppose the migration to Python 3 will happen around Stein and therefore affects people upgrading from Rocky to Stein. At this point, an operator should already be running Ubuntu Bionic which supports both Python 2.7 and 3.6. If that operator is using virtualenv (and not distribution packages), it's only a matter of building a new virtualenv using Python 3.6 for Stein instead.
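The per-service virtualenv swap described here can be sketched with the stdlib venv module (paths and the package version are illustrative, not a real deployment layout):

```python
# Create a python3 environment for one service while the others keep
# running from their existing python2 virtualenvs.
import os
import tempfile
import venv

root = tempfile.mkdtemp(prefix='venvs-')       # stand-in for /opt/venvs
nova_stein = os.path.join(root, 'nova-stein')
venv.create(nova_stein, with_pip=False)        # with_pip=False: fast demo

# The interpreter inside is the python3 the venv was built with.
# Installing the service itself needs network access, so it is only
# shown as a comment:  <venv>/bin/pip install 'nova>=19.0.0'
py = os.path.join(nova_stein, 'bin', 'python')  # 'Scripts' on Windows
```

Repointing each service's unit file at its new venv, one service at a time, is what makes the per-project switch possible on a single node.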
This means installing both Python 2.7/3.6 on the same node should be enough to upgrade and switch to Python 3.6 on a per project/service basis. My main use case is with the compute node which has multiple services running. Come to think of it, it's a lot less impactful than I thought. Let me know if I got some details wrong. But if the steps are similar to what I described above, I no longer have concerns or objections. I think the only people who could be concerned are those doing rolling upgrades, which could impact RPC message encoding as described by Thomas. But you are already addressing it so I will just read and see where this is going. Thanks -- Mathieu From tengqim at cn.ibm.com Fri Sep 14 05:50:00 2018 From: tengqim at cn.ibm.com (Qiming Teng) Date: Fri, 14 Sep 2018 05:50:00 +0000 Subject: [openstack-dev] [senlin][stable] Nominating chenyb4 to Senlin Stable Maintainers Team In-Reply-To: References: Message-ID: <20180914054959.GA5969@rcp.sl.cloud9.ibm.com> +2 from me. Thanks. - Qiming On Mon, Sep 10, 2018 at 09:56:10AM -0700, Duc Truong wrote: > Hi Senlin Stable Team, > > I would like to nominate Yuanbin Chen (chenyb4) to the Senlin stable > review team. Yuanbin has been doing stable reviews and shown that he > understands the policy for merging stable patches [1]. > > Voting is open for 7 days. Please reply with your +1 vote in favor or > -1 as a veto vote.
> > [1] https://review.openstack.org/#/q/branch:%255Estable/.*+reviewedby:cybing4%2540gmail.com > > Regards, > > Duc (dtruong) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tony at bakeyournoodle.com Fri Sep 14 07:36:49 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 14 Sep 2018 17:36:49 +1000 Subject: [openstack-dev] [stable] (ex) PTL on vacation Message-ID: <20180914073649.GA23273@thor.bakeyournoodle.com> Hi All, As Stable is no longer a project, I'm no longer a PTL so I don't really need to do this but ... I'm going on vacation for 3'ish weeks. I do plan on checking my email from time-to-time but really if anything comes up that needs urgent attention you'll need to ping stable-maint-core. I'm not fussy about my open changes so if they need fixing and you'd like them merged while I'm out feel free to upload your own revision. Have fun, I know I will ;D Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dharmendra.kushwaha at india.nec.com Fri Sep 14 08:52:50 2018 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Fri, 14 Sep 2018 08:52:50 +0000 Subject: [openstack-dev] [Tacker] vPTG meetup schedule Message-ID: Hi all, We have planned to have our one-day virtual PTG meetup for Stein on below schedule. Please find the meeting details: Schedule: 21st September, 7:00UTC to 11:00UTC Meeting Channel: https://bluejeans.com/553456496 Etherpad link: https://etherpad.openstack.org/p/Tacker-PTG-Stein Thanks & Regards Dharmendra Kushwaha -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chkumar246 at gmail.com Fri Sep 14 12:37:42 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Fri, 14 Sep 2018 18:07:42 +0530 Subject: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO Message-ID: Hello, The Ocata release went EOL on 27-08-2018 [1]. In TripleO, we are running Ocata jobs in TripleO CI and in promotion pipelines. Can we drop all the jobs related to Ocata, or do we need to keep some jobs to support upgrades in CI? Links: [1.] https://releases.openstack.org/ Thanks, Chandan Kumar From stdake at cisco.com Fri Sep 14 13:20:04 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Fri, 14 Sep 2018 13:20:04 +0000 Subject: [openstack-dev] [kolla] Committing proprietary plugins to OpenStack In-Reply-To: References: <6ece4952-6ea2-70e6-2b7d-3c2d4dbe8287@suse.com>, Message-ID: <1536931205555.60413@cisco.com> Shyam, Our policy, decided long ago, is that we would work with third party components (such as plugins) for nova, cinder, neutron, horizon, etc that were proprietary as long as the code that merges into Kolla specifically is ASL2. What is your plugin for? If it's for nova, cinder, neutron, horizon, it is covered by this policy pretty much wholesale. If it's a different type of system, some debate may be warranted by the core team. Cheers -steve ________________________________ From: Shyam Biradar Sent: Wednesday, September 12, 2018 5:01 AM To: Andreas Jaeger Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [kolla] Committing proprietary plugins to OpenStack Yes Andreas, whatever deployment scripts we push will be under the Apache license. Shyam Biradar Software Engineer | DevOps M +91 8600266938 | shyam.biradar at trilio.io | trilio.io On Wed, Sep 12, 2018 at 5:24 PM, Andreas Jaeger > wrote: On 2018-09-12 13:21, Shyam Biradar wrote: Hi, We have a proprietary openstack plugin.
We want to commit deployment scripts like containers and heat templates upstream in the tripleo and kolla projects, but not the actual product code. Is it possible? Or how can we handle this case? Any thoughts are welcome. It's first a legal question - is everything you are pushing under the Apache license, like the rest of the project that you push to? And then it's a policy question for the kolla project, so let me tag them Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Fri Sep 14 13:33:52 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 14 Sep 2018 07:33:52 -0600 Subject: [openstack-dev] About microversion setting to enable nested resource provider In-Reply-To: <9eba70d4c66a435792fe2c9c3ba596d4@SINPEX02CL01.citrite.net> References: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> <90a534cec8ff4957a141af2ed1686934@SINPEX02CL01.citrite.net> <0acdc7e5-432f-fc99-4ce2-c9df53af1a3b@fried.cc> <9eba70d4c66a435792fe2c9c3ba596d4@SINPEX02CL01.citrite.net> Message-ID: On Thu, Sep 13, 2018 at 19:29, Naichuan Sun wrote: > Hi, Sylvain, > > > > Thank you very much for the information. It is a pity that I can’t attend > the meeting. > > I have a concern about reshaper in multi-type vgpu support. > > In the old vgpu support, we only have one vgpu inventory in the root resource > provider, which means we only support one vgpu type. When doing a reshape, > placement will send allocations (which include just one vgpu resource > allocation entry) to the driver; if the host has more than one > pgpu/pgpug (which support different vgpu types), how do we know which > pgpu/pgpug owns the allocation information?
Do we need to communicate with > the hypervisor to confirm that? > The reshape will actually move the existing allocations for a VGPU resource class to the inventory for this class that is on the child resource provider now with the reshape. Since we agreed on keeping consistent naming, there is no need to guess which is which. That said, you raise a point that was discussed during the PTG and we all agreed there was an upgrade impact as multiple vGPUs shouldn't be allowed until the reshape is done. Accordingly, see the spec I reproposed for Stein which describes the upgrade impact https://review.openstack.org/#/c/602474/ Since I'm at the PTG, we have a huge time difference between you and me, but we can discuss that point next week when I'm back (my mornings then match your afternoons) -Sylvain > > > Thank you very much. > > > > BR. > > Naichuan Sun > > > > *From:* Sylvain Bauza [mailto:sbauza at redhat.com] > *Sent:* Thursday, September 13, 2018 11:47 PM > *To:* OpenStack Development Mailing List (not for usage questions) < > openstack-dev at lists.openstack.org> > *Subject:* Re: [openstack-dev] About microversion setting to enable > nested resource provider > > > > Hey Naichuan, > > FWIW, we discussed the missing pieces for nested resource providers. > See the (currently-in-use) etherpad > https://etherpad.openstack.org/p/nova-ptg-stein and look for "closing > the gap on nested resource providers" (L144 while I speak)
> > Is there somewhere I could set microversion(some configure file?), Or > just modify the source code to set it? > > > > BR. > > Naichuan Sun > > > > -----Original Message----- > > From: Jay Pipes [mailto:jaypipes at gmail.com] > > Sent: Thursday, September 13, 2018 9:19 PM > > To: Naichuan Sun ; OpenStack Development > Mailing List (not for usage questions) > > Cc: melanie witt ; efried at us.ibm.com; Sylvain Bauza > > > Subject: Re: About microversion setting to enable nested resource > provider > > > > On 09/13/2018 06:39 AM, Naichuan Sun wrote: > >> Hi, guys, > >> > >> Looks n-rp is disabled by default because microversion matches 1.29 : > >> https://github.com/openstack/nova/blob/master/nova/api/openstack/place > >> ment/handlers/allocation_candidate.py#L252 > >> > >> Anyone know how to set the microversion to enable n-rp in placement? > > > > It is the client which must send the 1.29+ placement API microversion > header to indicate to the placement API server that the client wants to > receive nested provider information in the allocation candidates response. > > > > Currently, nova-scheduler calls the scheduler reportclient's > > get_allocation_candidates() method: > > > > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/manager.py#L138 > > > > The scheduler reportclient's get_allocation_candidates() method > currently passes the 1.25 placement API microversion header: > > > > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L353 > > > > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L53 > > > > In order to get the nested information returned in the allocation > candidates response, that would need to be upped to 1.29. 
> > > > Best, > > -jay > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davanum at gmail.com Fri Sep 14 14:45:05 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Fri, 14 Sep 2018 08:45:05 -0600 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180913204428.bydeuacugcydpfxj@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> <9ed16b6f-bc3a-4de3-bbbd-db62ac1ec32d@gmail.com> <20180913204428.bydeuacugcydpfxj@yuggoth.org> Message-ID: Folks, Sorry for the top post - Those of you that are still at PTG, please feel free to drop in to the Clear Creek room today. Thanks, Dims On Thu, Sep 13, 2018 at 2:44 PM Jeremy Stanley wrote: > On 2018-09-12 17:50:30 -0600 (-0600), Matt Riedemann wrote: > [...] > > Again, I'm not saying TC members should be doing all of the work > > themselves. 
That's not realistic, especially when critical parts > > of any major effort are going to involve developers from projects > > on which none of the TC members are active contributors (e.g. > > nova). I want to see TC members herd cats, for lack of a better > > analogy, and help out technically (with code) where possible. > > I can respect that. I think that OpenStack made a mistake in naming > its community management governance body the "technical" committee. > I do agree that having TC members engage in activities with tangible > outcomes is preferable, and that the needs of the users of its > software should weigh heavily in prioritization decisions, but those > are not the only problems our community faces nor is it as if there > are no other responsibilities associated with being a TC member. > > > Given the repeated mention of how the "help wanted" list continues > > to not draw in contributors, I think the recruiting role of the TC > > should take a back seat to actually stepping in and helping work > > on those items directly. For example, Sean McGinnis is taking an > > active role in the operators guide and other related docs that > > continue to be discussed at every face to face event since those > > docs were dropped from openstack-manuals (in Pike). > > I completely agree that the help wanted list hasn't worked out well > in practice. It was based on requests from the board of directors to > provide some means of communicating to their business-focused > constituency where resources would be most useful to the project. > We've had a subsequent request to reorient it to be more like a set > of job descriptions along with clearer business use cases explaining > the benefit to them of contributing to these efforts. In my opinion > it's very much the responsibility of the TC to find ways to > accomplish these sorts of things as well. 
> > > I think it's fair to say that the people generally elected to the > > TC are those most visible in the community (it's a popularity > > contest) and those people are generally the most visible because > > they have the luxury of working upstream the majority of their > > time. As such, it's their duty to oversee and spend time working > > on the hard cross-project technical deliverables that operators > > and users are asking for, rather than think of an infinite number > > of ways to try and draw *others* to help work on those gaps. > > But not everyone who is funded for full-time involvement with the > community is necessarily "visible" in ways that make them electable. > Higher-profile involvement in such activities over time is what gets > them the visibility to be more easily elected to governance > positions via "popularity contest" mechanics. > > > As I think it's the role of a PTL within a given project to have a > > finger on the pulse of the technical priorities of that project > > and manage the developers involved (of which the PTL certainly may > > be one), it's the role of the TC to do the same across openstack > > as a whole. If a PTL doesn't have the time or willingness to do > > that within their project, they shouldn't be the PTL. The same > > goes for TC members IMO. > > Completely agree, I think we might just disagree on where to strike > the balance of purely technical priorities for the TC (as I > personally think the TC is somewhat incorrectly named). > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lbragstad at gmail.com Fri Sep 14 14:46:36 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 14 Sep 2018 08:46:36 -0600 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: Message-ID: On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson wrote: > In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" > which maps to the "os--api::" format. > Thanks for explaining the justification, Michael. I'm curious if anyone has context on the "os-" part of the format? I've seen that pattern in a couple different projects. Does anyone know about its origin? Was it something we converted to our policy names because of API names/paths? > > I selected it as it uses the service-type[1], references the API > resource, and then the method. So it maps well to the API reference[2] > for the service. > > [0] https://docs.openstack.org/octavia/latest/configuration/policy.html > [1] https://service-types.openstack.org/ > [2] > https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer > > Michael > On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: > > > > So +1 > > > > > > > > Tim > > > > > > > > From: Lance Bragstad > > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > > Date: Wednesday, 12 September 2018 at 20:43 > > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org>, OpenStack Operators < > openstack-operators at lists.openstack.org> > > Subject: [openstack-dev] [all] Consistent policy names > > > > > > > > The topic of having consistent policy names has popped up a few times > this week. Ultimately, if we are to move forward with this, we'll need a > convention. To help with that a little bit I started an etherpad [0] that > includes links to policy references, basic conventions *within* that > service, and some examples of each. 
I got through quite a few projects this > morning, but there are still a couple left. > > > > > > > > The idea is to look at what we do today and see what conventions we can > come up with to move towards, which should also help us determine how much > each convention is going to impact services (e.g. picking a convention that > will cause 70% of services to rename policies). > > > > > > > > Please have a look and we can discuss conventions in this thread. If we > come to agreement, I'll start working on some documentation in oslo.policy > so that it's somewhat official because starting to renaming policies. > > > > > > > > [0] https://etherpad.openstack.org/p/consistent-policy-names > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Fri Sep 14 15:01:11 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 14 Sep 2018 09:01:11 -0600 Subject: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO In-Reply-To: References: Message-ID: On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar wrote: > Hello, > > As Ocata release is already EOL on 27-08-2018 [1]. > In TripleO, we are running Ocata jobs in TripleO CI and in promotion pipelines. > Can we drop it all the jobs related to Ocata or do we need to keep some jobs > to support upgrades in CI? > I think unless there are any objections around upgrades, we can drop the promotion pipelines. 
It's likely that we'll also want to officially EOL the tripleo ocata branches. Thanks, -Alex > Links: > [1.] https://releases.openstack.org/ > > Thanks, > > Chandan Kumar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaosorior at redhat.com Fri Sep 14 15:20:12 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Fri, 14 Sep 2018 09:20:12 -0600 Subject: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO In-Reply-To: References: Message-ID: On 09/14/2018 09:01 AM, Alex Schultz wrote: > On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar wrote: >> Hello, >> >> As Ocata release is already EOL on 27-08-2018 [1]. >> In TripleO, we are running Ocata jobs in TripleO CI and in promotion pipelines. >> Can we drop it all the jobs related to Ocata or do we need to keep some jobs >> to support upgrades in CI? >> > I think unless there are any objections around upgrades, we can drop > the promotion pipelines. It's likely that we'll also want to > officially EOL the tripleo ocata branches. sounds good to me. > Thanks, > -Alex > >> Links: >> [1.] 
https://releases.openstack.org/ >> >> Thanks, >> >> Chandan Kumar >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From elod.illes at ericsson.com Fri Sep 14 16:20:49 2018 From: elod.illes at ericsson.com (=?UTF-8?B?RWzDtWQgSWxsw6lz?=) Date: Fri, 14 Sep 2018 10:20:49 -0600 Subject: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO In-Reply-To: References: Message-ID: <8b2cab2b-34c4-705e-5c3a-c310ccb919f1@ericsson.com> Hi, just a comment: Ocata release is not EOL [1][2] rather in Extended Maintenance. Do you really want to EOL TripleO stable/ocata? [1] https://releases.openstack.org/ [2] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html Cheers, Előd On 2018-09-14 09:20, Juan Antonio Osorio Robles wrote: > > On 09/14/2018 09:01 AM, Alex Schultz wrote: >> On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar wrote: >>> Hello, >>> >>> As Ocata release is already EOL on 27-08-2018 [1]. >>> In TripleO, we are running Ocata jobs in TripleO CI and in promotion pipelines. >>> Can we drop it all the jobs related to Ocata or do we need to keep some jobs >>> to support upgrades in CI? >>> >> I think unless there are any objections around upgrades, we can drop >> the promotion pipelines. It's likely that we'll also want to >> officially EOL the tripleo ocata branches. > sounds good to me. >> Thanks, >> -Alex >> >>> Links: >>> [1.] 
https://releases.openstack.org/ >>> >> Thanks, >> >> Chandan Kumar >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aschultz at redhat.com Fri Sep 14 16:31:59 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 14 Sep 2018 10:31:59 -0600 Subject: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO In-Reply-To: <8b2cab2b-34c4-705e-5c3a-c310ccb919f1@ericsson.com> References: <8b2cab2b-34c4-705e-5c3a-c310ccb919f1@ericsson.com> Message-ID: On Fri, Sep 14, 2018 at 10:20 AM, Előd Illés wrote: > Hi, > > just a comment: Ocata release is not EOL [1][2] rather in Extended > Maintenance. Do you really want to EOL TripleO stable/ocata? > Yes unless there are any objections. We've already been keeping this branch alive on life support but CI has started to fail and we've just been turning off jobs as they fail. We had not planned on extended maintenance for Ocata (or Pike). We'll likely consider that starting with Queens. We could switch it to extended maintenance but without the promotion jobs we won't have packages to run CI so it would be better to just EOL it.
Thanks, -Alex > [1] https://releases.openstack.org/ > [2] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > > Cheers, > > Előd > > > > On 2018-09-14 09:20, Juan Antonio Osorio Robles wrote: >> >> >> On 09/14/2018 09:01 AM, Alex Schultz wrote: >>> >>> On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar >>> wrote: >>>> >>>> Hello, >>>> >>>> As Ocata release is already EOL on 27-08-2018 [1]. >>>> In TripleO, we are running Ocata jobs in TripleO CI and in promotion >>>> pipelines. >>>> Can we drop it all the jobs related to Ocata or do we need to keep some >>>> jobs >>>> to support upgrades in CI? >>>> >>> I think unless there are any objections around upgrades, we can drop >>> the promotion pipelines. It's likely that we'll also want to >>> officially EOL the tripleo ocata branches. >> >> sounds good to me. >>> >>> Thanks, >>> -Alex >>> >>>> Links: >>>> [1.] https://releases.openstack.org/ >>>> >>>> Thanks, >>>> >>>> Chandan Kumar >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack 
Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Fri Sep 14 16:35:35 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 14 Sep 2018 16:35:35 +0000 Subject: [openstack-dev] [all] Ongoing spam in Freenode IRC channels In-Reply-To: <20180911165305.btj6xzokdt6v4xsq@yuggoth.org> References: <68ddbe14-8cc7-da92-c354-06f21ea66f64@redhat.com> <5f2d90fc-b96f-2284-3b86-fb6e2c6fbcc1@inaugust.com> <87bmamayd8.fsf@meyer.lemoncheese.net> <20180911165305.btj6xzokdt6v4xsq@yuggoth.org> Message-ID: <20180914163535.bdpwrivpsmmmce4i@yuggoth.org> On 2018-09-11 16:53:05 +0000 (+0000), Jeremy Stanley wrote: > On 2018-08-01 08:40:51 -0700 (-0700), James E. Blair wrote: > > Monty Taylor writes: > > > On 08/01/2018 12:45 AM, Ian Wienand wrote: > > > > I'd suggest to start, people with an interest in a channel can > > > > request +r from an IRC admin in #openstack-infra and we track > > > > it at [2] > > > > > > To mitigate the pain caused by +r - we have created a channel > > > called #openstack-unregistered and have configured the channels > > > with the +r flag to forward people to it. > [...] > > It turns out this was a very popular option, so we've gone ahead > > and performed this for all channels registered with accessbot. > [...] > > We rolled this back 5 days ago for all channels and haven't had any > new reports of in-channel spamming yet. Hopefully this means the > recent flood is behind us now but definitely let us know (replying > on this thread or in #openstack-infra on Freenode) if you see any > signs of resurgence. And then it was turned back on again a few hours ago after a new wave of spam cropped up. We'll try to continue to keep an eye on things. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Fri Sep 14 17:24:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 14 Sep 2018 11:24:03 -0600 Subject: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it? In-Reply-To: <594cca34-a710-0c4b-200b-45f892e98581@gmail.com> References: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> <100034a8-57f3-1eea-a792-97ca1328967c@gmail.com> <594cca34-a710-0c4b-200b-45f892e98581@gmail.com> Message-ID: On 3/28/2018 4:35 PM, Jay Pipes wrote: > On 03/28/2018 03:35 PM, Matt Riedemann wrote: >> On 3/27/2018 10:37 AM, Jay Pipes wrote: >>> >>> If we want to actually fix the issue once and for all, we need to >>> make availability zones a real thing that has a permanent identifier >>> (UUID) and store that permanent identifier in the instance (not the >>> instance metadata). >>> >>> Or we can continue to paper over major architectural weaknesses like >>> this. >> >> Stepping back a second from the rest of this thread, what if we do the >> hard fail bug fix thing, which could be backported to stable branches, >> and then we have the option of completely re-doing this with aggregate >> UUIDs as the key rather than the aggregate name? Because I think the >> former could get done in Rocky, but the latter probably not. > > I'm fine with that (and was fine with it before, just stating that > solving the problem long-term requires different thinking) > > Best, > -jay Just FYI for anyone that cared about this thread, we agreed at the Stein PTG to resolve the immediate bug [1] by blocking AZ renames while the AZ has instances in it. There won't be a microversion for that change and we'll be able to backport it (with a release note I suppose). 
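[For anyone skimming the thread: the agreed behaviour boils down to a guard of roughly the following shape. This is a hand-written sketch with made-up names (update_aggregate_az, AZRenameForbidden, the instance-count callable), not the actual nova change:

```python
# Sketch of "hard fail on AZ rename while instances exist"; illustrative only.

class AZRenameForbidden(Exception):
    """Raised when renaming an availability zone that still has instances."""

def update_aggregate_az(aggregate, new_az, instance_count_in_az):
    """Apply an AZ rename, refusing it if the old AZ still has instances.

    instance_count_in_az is a callable returning the number of instances
    currently in a given AZ (in nova this would be a DB query).
    """
    old_az = aggregate.get("availability_zone")
    if old_az and new_az != old_az and instance_count_in_az(old_az) > 0:
        # Hard fail (a 400 at the API layer) instead of silently leaving
        # existing instances pointing at a stale AZ name.
        raise AZRenameForbidden("cannot rename AZ %r while it has instances" % old_az)
    aggregate["availability_zone"] = new_az

agg = {"name": "agg1", "availability_zone": "az-east"}
update_aggregate_az(agg, "az-west", lambda az: 0)  # empty AZ: rename succeeds
print(agg["availability_zone"])
# -> az-west
```
]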
[1] https://bugs.launchpad.net/nova/+bug/1782539 -- Thanks, Matt From kennelson11 at gmail.com Fri Sep 14 17:26:06 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 14 Sep 2018 11:26:06 -0600 Subject: [openstack-dev] [PTG] Stein Team Photo Files Message-ID: Hello! Here are the photos we took this week of various teams :) https://www.dropbox.com/sh/2pmvfkstudih2wf/AAAGg7c0bYZcWQwKDOKiSwR7a?dl=0 Enjoy! -the Kendalls (diablo_rojo & wendallkaters) -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Fri Sep 14 17:49:40 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 14 Sep 2018 11:49:40 -0600 Subject: [openstack-dev] [tc]Global Reachout Proposal Message-ID: Hi all, Following up on the diversity discussion we had in the TC session this morning [0], I've proposed a resolution on facilitating the technical community at large in engaging in global reachout for OpenStack more efficiently. Your feedback is welcome. Whether or not this ends up as a new resolution at the end of the day, it is a conversation worth having. [0] https://review.openstack.org/602697 -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengh512 at gmail.com Fri Sep 14 19:52:50 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 14 Sep 2018 13:52:50 -0600 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout Message-ID: This is a joint question from mnaser and me :) For the candidates who are running for TC seats, please reply to this email to indicate whether you are open to using certain social media apps in certain regions (like WeChat in China, Line in Japan, etc.) in order to reach out to the OpenStack developers in those regions and help them connect to the upstream community, as well as answering questions or other activities that will help. (sorry for the long sentence ... ) Rico and I have already signed up for WeChat communication for sure :) -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Sep 14 20:16:16 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 14 Sep 2018 14:16:16 -0600 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: Message-ID: I don't know for sure, but I assume it is short for "OpenStack", prefixing OpenStack policies vs. third-party plugin policies for documentation purposes. I am guilty of borrowing this from existing code examples[0]. 
[0] http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html Michael On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad wrote: > > > > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson wrote: >> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" >> which maps to the "os--api::" format. > > > Thanks for explaining the justification, Michael. > > I'm curious if anyone has context on the "os-" part of the format? I've seen that pattern in a couple different projects. Does anyone know about its origin? Was it something we converted to our policy names because of API names/paths? > >> >> >> I selected it as it uses the service-type[1], references the API >> resource, and then the method. So it maps well to the API reference[2] >> for the service. >> >> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html >> [1] https://service-types.openstack.org/ >> [2] https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer >> >> Michael >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: >> > >> > So +1 >> > >> > >> > >> > Tim >> > >> > >> > >> > From: Lance Bragstad >> > Reply-To: "OpenStack Development Mailing List (not for usage questions)" >> > Date: Wednesday, 12 September 2018 at 20:43 >> > To: "OpenStack Development Mailing List (not for usage questions)" , OpenStack Operators >> > Subject: [openstack-dev] [all] Consistent policy names >> > >> > >> > >> > The topic of having consistent policy names has popped up a few times this week. Ultimately, if we are to move forward with this, we'll need a convention. To help with that a little bit I started an etherpad [0] that includes links to policy references, basic conventions *within* that service, and some examples of each. I got through quite a few projects this morning, but there are still a couple left. 
>> > >> > >> > >> > The idea is to look at what we do today and see what conventions we can come up with to move towards, which should also help us determine how much each convention is going to impact services (e.g. picking a convention that will cause 70% of services to rename policies). >> > >> > >> > >> > Please have a look and we can discuss conventions in this thread. If we come to agreement, I'll start working on some documentation in oslo.policy so that it's somewhat official because starting to renaming policies. >> > >> > >> > >> > [0] https://etherpad.openstack.org/p/consistent-policy-names >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From jim at jimrollenhagen.com Fri Sep 14 20:22:58 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 14 Sep 2018 14:22:58 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? 
In-Reply-To: References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <1536882001-sup-6554@lrrr.local> <1536890416-sup-8249@lrrr.local> Message-ID: On Thu, Sep 13, 2018 at 9:46 PM, Mathieu Gagné wrote: > On Thu, Sep 13, 2018 at 10:14 PM, Doug Hellmann > wrote: > > Excerpts from Mathieu Gagné's message of 2018-09-13 20:09:12 -0400: > >> On Thu, Sep 13, 2018 at 7:41 PM, Doug Hellmann > wrote: > >> > Excerpts from Mathieu Gagné's message of 2018-09-13 14:12:56 -0400: > >> >> On Wed, Sep 12, 2018 at 2:04 PM, Doug Hellmann < > doug at doughellmann.com> wrote: > >> >> > > >> >> > IIRC, we also talked about not supporting multiple versions of > >> >> > python on a given node, so all of the services on a node would need > >> >> > to be upgraded together. > >> >> > > >> >> > >> >> Will services support both versions at some point for the same > >> >> OpenStack release? Or is it already the case? > >> >> > >> >> I would like to avoid having to upgrade Nova, Neutron and Ceilometer > >> >> at the same time since all end up running on a compute node and > >> >> sharing the same python version. > >> > > >> > We need to differentiate between what the upstream community supports > >> > and what distros support. In the meeting in Vancouver, we said that > >> > the community would support upgrading all of the services on a > >> > single node together. Distros may choose to support more complex > >> > configurations if they choose, and I'm sure patches related to any > >> > bugs would be welcome. > >> > >> We maintain and build our own packages with virtualenv. We aren't > >> bound to distribution packages. > > > > OK, I should rephrase then. I'm talking about the limits on the > > tests that I think are useful and reasonable to run upstream and > > for the community to support. 
> > > >> > But I don't think we can ask the community > >> > to support the infinite number of variations that would occur if > >> > we said we would test upgrading some services independently of > >> > others (unless I'm mistaken, we don't even do that for services > >> > all using the same version of python 2, today). > >> > >> This contradicts what I heard in fishbowl sessions from core reviewers > >> and read on IRC. > >> People were under the false impression that you need to upgrade > >> OpenStack in lock steps when in fact, it has never been the case. > >> You should be able to upgrade services individually. > >> > >> Has it changed since? > > > > I know that some deployments do upgrade components separately, and > > it works in some configurations. All we talked about in Vancouver > > was how we would test upgrading python 2 to python 3, and given > > that the community has never, as far as I know, run upgrade tests > > in CI that staggered the upgrades of components on a given node, > > there seemed no reason to add those tests just for the python 2 to > > 3 case. > > > > Perhaps someone on the QA team can correct me if I'm wrong about the > > history there. > > > > Or maybe it's me that misinterpreted the actual impact of not > supported 2 versions of Python at the same time. > > Lets walk through an actual upgrade scenario. > > I suppose the migration to Python 3 will happen around Stein and > therefore affects people upgrading from Rocky to Stein. At this point, > an operator should already be running Ubuntu Bionic which supports > both Python 2.7 and 3.6. > > If that operator is using virtualenv (and not distribution packages), > it's only a matter a building new virtualenv using Python 3.6 for > Stein instead. This means installing both Python 2.7/3.6 on the same > node should be enough to upgrade and switch to Python 3.6 on a per > project/service basis. > > My main use case is with the compute node which has multiple services > running. 
Come to think of it, it's a lot less impactful than I > thought. > > Let me know if I got some details wrong. But if the steps are similar > to what I described above, I no longer have concerns or objections. > The plan is to maintain support for both Python 2 and 3 in the T release (and possibly S, if the projects get py3 work done quickly). See https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html#python2-deprecation-timeline, notably: 2. All projects must complete the work for Python 3 support by the end of the T cycle, unless they are blocked for technical reasons by dependencies they rely on. 4. Existing projects under TC governance at the time this resolution is accepted must not drop support for Python 2 before the beginning of the U development cycle (currently anticipated for late 2019). This gives operators an opportunity to keep the Python 2->3 upgrade completely separate from an OpenStack upgrade. So in short, yes, you'll be fine :) // jim > > I think the only people who could be concerned is those doing rolling > upgrading, which could impact RPC message encoding as described by > Thomas. But you are already addressing it so I will just read and see > where this is going. > > Thanks > > -- > Mathieu > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Fri Sep 14 20:47:57 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 14 Sep 2018 20:47:57 +0000 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: <20180914204756.o5umojwxvypskwti@yuggoth.org> On 2018-09-14 13:52:50 -0600 (-0600), Zhipeng Huang wrote: > This is a joint question from mnaser and me :) > > For the candidates who are running for tc seats, please reply to > this email to indicate if you are open to use certain social media > app in certain region (like Wechat in China, Line in Japan, etc.), > in order to reach out to the OpenStack developers in that region > and help them to connect to the upstream community as well as > answering questions or other activities that will help. (sorry for > the long sentence ... ) [...] I respect that tool choices can make a difference in enabling or improving our outreach to specific cultures. I'll commit to personally rejecting presence on proprietary social media services so as to demonstrate that public work can be done within our community while relying exclusively on free/libre open source software. I recognize the existence of the free software movement as a distinct culture with whom we could do a better job of connecting. If as a community we promote and embrace non-free tools we will only continue to alienate them, so I'm happy to serve as an example that it is possible to be an engaged and effective contributor to our community without compromising those ideals. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mgagne at calavera.ca Fri Sep 14 20:48:32 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Fri, 14 Sep 2018 16:48:32 -0400 Subject: [openstack-dev] [goals][python3] mixed versions? 
In-Reply-To: References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <1536882001-sup-6554@lrrr.local> <1536890416-sup-8249@lrrr.local> Message-ID: On Fri, Sep 14, 2018 at 4:22 PM, Jim Rollenhagen wrote: > On Thu, Sep 13, 2018 at 9:46 PM, Mathieu Gagné wrote: >> >> On Thu, Sep 13, 2018 at 10:14 PM, Doug Hellmann >> wrote: >> > Excerpts from Mathieu Gagné's message of 2018-09-13 20:09:12 -0400: >> >> On Thu, Sep 13, 2018 at 7:41 PM, Doug Hellmann >> >> wrote: >> >> > Excerpts from Mathieu Gagné's message of 2018-09-13 14:12:56 -0400: >> >> >> On Wed, Sep 12, 2018 at 2:04 PM, Doug Hellmann >> >> >> wrote: >> >> >> > >> >> >> > IIRC, we also talked about not supporting multiple versions of >> >> >> > python on a given node, so all of the services on a node would >> >> >> > need >> >> >> > to be upgraded together. >> >> >> > >> >> >> >> >> >> Will services support both versions at some point for the same >> >> >> OpenStack release? Or is it already the case? >> >> >> >> >> >> I would like to avoid having to upgrade Nova, Neutron and Ceilometer >> >> >> at the same time since all end up running on a compute node and >> >> >> sharing the same python version. >> >> > >> >> > We need to differentiate between what the upstream community supports >> >> > and what distros support. In the meeting in Vancouver, we said that >> >> > the community would support upgrading all of the services on a >> >> > single node together. Distros may choose to support more complex >> >> > configurations if they choose, and I'm sure patches related to any >> >> > bugs would be welcome. >> >> >> >> We maintain and build our own packages with virtualenv. We aren't >> >> bound to distribution packages. >> > >> > OK, I should rephrase then. I'm talking about the limits on the >> > tests that I think are useful and reasonable to run upstream and >> > for the community to support. 
>> > >> >> > But I don't think we can ask the community >> >> > to support the infinite number of variations that would occur if >> >> > we said we would test upgrading some services independently of >> >> > others (unless I'm mistaken, we don't even do that for services >> >> > all using the same version of python 2, today). >> >> >> >> This contradicts what I heard in fishbowl sessions from core reviewers >> >> and read on IRC. >> >> People were under the false impression that you need to upgrade >> >> OpenStack in lock steps when in fact, it has never been the case. >> >> You should be able to upgrade services individually. >> >> >> >> Has it changed since? >> > >> > I know that some deployments do upgrade components separately, and >> > it works in some configurations. All we talked about in Vancouver >> > was how we would test upgrading python 2 to python 3, and given >> > that the community has never, as far as I know, run upgrade tests >> > in CI that staggered the upgrades of components on a given node, >> > there seemed no reason to add those tests just for the python 2 to >> > 3 case. >> > >> > Perhaps someone on the QA team can correct me if I'm wrong about the >> > history there. >> > >> >> Or maybe it's me that misinterpreted the actual impact of not >> supported 2 versions of Python at the same time. >> >> Lets walk through an actual upgrade scenario. >> >> I suppose the migration to Python 3 will happen around Stein and >> therefore affects people upgrading from Rocky to Stein. At this point, >> an operator should already be running Ubuntu Bionic which supports >> both Python 2.7 and 3.6. >> >> If that operator is using virtualenv (and not distribution packages), >> it's only a matter a building new virtualenv using Python 3.6 for >> Stein instead. This means installing both Python 2.7/3.6 on the same >> node should be enough to upgrade and switch to Python 3.6 on a per >> project/service basis. 
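[The per-service scenario Mathieu describes below can be sketched with the standard library along these lines. Everything here is illustrative: the venv root, the service list, and the cut-over steps are made up, and a real deployment would build the venvs with pip and install each service's requirements.

```python
import subprocess
import tempfile
import venv
from pathlib import Path

# Illustration only: build a fresh Python 3 venv per service while the old
# Python 2 venvs keep running, then cut services over one at a time.
root = Path(tempfile.mkdtemp())  # stand-in for something like /opt/openstack/venvs

for svc in ("nova", "neutron", "ceilometer"):
    env_dir = root / ("%s-stein" % svc)
    venv.create(env_dir, with_pip=False)  # with_pip=True in real deployments
    # Sanity check: the new interpreter really is Python 3.
    out = subprocess.check_output(
        [str(env_dir / "bin" / "python"),
         "-c", "import sys; print(sys.version_info[0])"])
    assert out.strip() == b"3"
    print("%s: Python 3 venv ready at %s" % (svc, env_dir))

# Cut-over then happens per service: stop nova-compute, repoint its
# init/systemd unit at <root>/nova-stein/bin, start it, verify, and only
# then move on to neutron and ceilometer.
```

The point is simply that both interpreter stacks coexist on the node, so each service can move to Python 3 independently.]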
>> >> My main use case is with the compute node which has multiple services >> running. Come to think of it, it's a lot less impactful than I >> thought. >> >> Let me know if I got some details wrong. But if the steps are similar >> to what I described above, I no longer have concerns or objections. > > > > The plan is to maintain support for both Python 2 and 3 in the T release > (and possibly S, if the projects get py3 work done quickly). See > https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html#python2-deprecation-timeline, > notably: > > 2. All projects must complete the work for Python 3 support by the end of > the T cycle, unless they are blocked for technical reasons by dependencies > they rely on. > 4. Existing projects under TC governance at the time this resolution is > accepted must not drop support for Python 2 before the beginning of the U > development cycle (currently anticipated for late 2019). > > This gives operators an opportunity to keep the Python 2->3 upgrade > completely separate from an OpenStack upgrade. > > So in short, yes, you'll be fine :) > > // jim Thanks for the clarification. Much appreciated! > >> >> >> I think the only people who could be concerned is those doing rolling >> upgrading, which could impact RPC message encoding as described by >> Thomas. But you are already addressing it so I will just read and see >> where this is going. >> >> Thanks >> >> -- >> Mathieu >> -- Mathieu From rico.lin.guanyu at gmail.com Fri Sep 14 21:57:22 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 14 Sep 2018 15:57:22 -0600 Subject: [openstack-dev] [Openstack-sigs][Openstack-operators][all]Expose SIGs/WGs as single window for Users/Ops scenario Message-ID: The idea has been raised around (by me and in Matt's ML thread), so I would like to give people an update on it (in terms of what I have been raising, what feedback people have given, and what initial ideas or actions I can collect). 
*Why are we doing this?* The basic concept is to give users/ops a single window that turns important scenarios, use cases, or issues (here's an example [1]) into traceable tasks in a single story/place, and to ask developers (by adjusting the mission in governance policy) to be responsible for co-working on those tasks. SIGs/WGs are eager to get feedback and use cases, and so are project teams (I won't speak for all projects/SIGs/WGs, but we would certainly like to collect more ideas). In return, project teams get a central place to develop for specific user requirements (Edge, NFV, Self-healing, K8s). One more idea: we can also use SIGs and WGs as a home for cross-project docs, where those documents give more general information on how a user can plan for that area (again Edge, NFV, Self-healing, K8s). Users/Ops also need clear information about the dependencies across the projects involved. It is also a potential way to give more projects exposure. From this step, we can plan a cross-project gating implementation (in project gates, or periodic). *So what triggered this, and what feedback have we gotten:* - This idea was raised as a topic in the K8s SIG and Self-healing SIG sessions. Feedback from both SIGs was generally positive; they are looking forward to this. SIGs appear eager to get use cases and user issues (I haven't walked the rest of the SIGs/WGs through this idea yet, so please leave feedback if you're in one of those groups), mostly because it adds value to SIGs/WGs in the areas they care about. - This idea was raised as a topic in the Ops-meetup session. Most ops think it would be super if someone were actually willing to handle their issues. The concern is that we need some structure or guidelines to avoid a crazy number of useless issues (for example, an issue template). Another piece of feedback from an operator was the concern that ops should instead go through everything in detail by themselves and contact the teams by themselves.
IMO it is up to the teams to set a template saying you must provide certain specific information, or even to figure out which project should be in charge of which failure. - This idea was raised as a topic in the TC session. The Public Cloud WG has had this idea as well (and they have done a good job!); it appears to be a very much preferred way of working for them. What happens there is that the Public Cloud WG collects a large number of use cases but would like to see immediate action, or a traceable way to keep tracking those tasks. Doug: It might be hard to push developers to SIGs/WGs, but SIGs/WGs can always propose a cross-project forum session. Also, it's important to let people know who they can talk to. Melvin: Make it easier for everyone, and give it visibility. How we can actually get a single thing done is very important. Thierry: Have a way to expose the top priorities that are important for OpenStack. - I also raised this with some PTLs and UC members. The reaction was generally good. Amy (super cute UC member) did raise the concern that there is manual work involved in binding tasks across bug-tracking platforms (e.g., if you create a story in the Self-healing SIG and say it relates to Heat and Neutron, you create a task for Heat in that story, but you still need to create a Launchpad bug and link it to that story). That may still need to be done manually for now, but what we might be able to change is to consider migrating most of the related teams to a single tracker in the long term. I didn't get the chance to reach most of the PTLs, but I do hope this thread is a place where PTLs can also share their feedback. - There is a ML thread in the Self-healing SIG [2]; not a lot of feedback on it, but it generally looks good. *What are the actions we can do right away:* - Please give us feedback. - Propose a forum session on this topic so everyone can discuss it (I already added a brainstorm item to the TC etherpad, but this cuts across projects, UC, TC, WGs, and SIGs).
- Set up a cross-committee discussion on restructuring missions, to make sure teams are responsible for helping with development, SIGs/WGs are responsible for tracking tasks at the story level and helping to trigger cross-project discussion, and operators are responsible for following the structure when sending issues and providing valuable information. - We can also run an experiment with the SIGs/WGs and related projects willing to join this for a while, see how the outcomes look, and adjust from there. - Can we set a cross-project goal for a group of projects instead of only a community-wide goal? - Also, if this turns out to be a nice idea, we can write a guideline for SIGs/WGs suggesting how they can have a cross-project gate, how to let users/ops file a story/issue in a format that is useful, and how to draw the attention of other projects to join in. These are what I got from the PTG; let's start from here together and scratch out what's to be done, shall we!! P.S. Sorry about the bad writing, but I have to catch a flight. [1] https://storyboard.openstack.org/#!/story/2002684 [2] http://lists.openstack.org/pipermail/openstack-sigs/2018-July/000432.html -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Fri Sep 14 22:43:58 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 14 Sep 2018 16:43:58 -0600 Subject: [openstack-dev] [nova][ptl] reduced PTL availability next week Sep 17 Message-ID: Hey all, This is just a heads up that I'll be off-site in Boston for work next week, so I won't be available on IRC (but I will be replying asynchronously to IRC messages and emails when I can). Gibi will be running the nova meeting on Thursday Sep 20 at 1400 UTC. I'm going to work on the PTG session summaries for the ML and documenting Stein cycle themes next week. I'm thinking of documenting the themes as part of the cycle priorities doc [1].
We've updated the PTG etherpad [2] with action items and agreements for all of the topics we covered. Please take a look at the etherpad to find the actions and agreements relevant to your topics of interest. We'll also kick off runways for Stein [3] next week. So, please feel free to start adding approved, ready-for-review items to the queue. And nova-core can start populating runways. If you have any questions about PTG topics or runways, just ask us in #openstack-nova on IRC or send a mail to the dev mailing list. Cheers, -melanie [1] https://specs.openstack.org/openstack/nova-specs/priorities/stein-priorities.html [2] https://etherpad.openstack.org/p/nova-ptg-stein [3] https://etherpad.openstack.org/p/nova-runways-stein From mriedemos at gmail.com Fri Sep 14 23:25:19 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 14 Sep 2018 17:25:19 -0600 Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend Message-ID: <80609709-7b11-f920-5a2b-2b980e936cf3@gmail.com> tl;dr: I'm proposing a new parameter to the server stop (and suspend?) APIs to control if nova shelve offloads the server. Long form: This came up during the public cloud WG session this week based on a couple of feature requests [1][2]. When a user stops/suspends a server, the hypervisor frees up resources on the host but nova continues to track those resources as being used on the host so the scheduler can't put more servers there. What operators would like to do is that when a user stops a server, nova actually shelve offloads the server from the host so they can schedule new servers on that host. On start/resume of the server, nova would find a new host for the server. This also came up in Vancouver where operators would like to free up limited expensive resources like GPUs when the server is stopped. This is also the behavior in AWS.
The problem with shelve is that it's great for operators but users just don't use it, maybe because they don't know what it is and stop works just fine. So how do you get users to opt into shelving their server? I've proposed a high-level blueprint [3] where we'd add a new (microversioned) parameter to the stop API with three options: * auto * offload * retain Naming is obviously up for debate. The point is we would default to auto and if auto is used, the API checks a config option to determine the behavior - offload or retain. By default we would retain for backward compatibility. For users that don't care, they get auto and it's fine. For users that do care, they either (1) don't opt into the microversion or (2) specify the specific behavior they want. I don't think we need to expose what the cloud's configuration for auto is because again, if you don't care then it doesn't matter and if you do care, you can opt out of this. "How do we get users to use the new microversion?" I'm glad you asked. Well, nova CLI defaults to using the latest available microversion negotiated between the client and the server, so by default, anyone using "nova stop" would get the 'auto' behavior (assuming the client and server are new enough to support it). Long-term, openstack client plans on doing the same version negotiation. As for the server status changes, if the server is stopped and shelved, the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I believe this is fine especially if a user is not being specific and doesn't care about the actual backend behavior. On start, the API would allow starting (unshelving) shelved offloaded (rather than just stopped) instances. Trying to hide shelved servers as stopped in the API would be overly complex IMO so I don't want to try and mask that. 
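The auto/offload/retain resolution described above comes down to a small piece of logic; a hedged sketch (the function and config names here are made up for illustration, not the actual nova change):

```python
# Hypothetical sketch of the proposed stop-API behavior: 'auto' defers to
# operator configuration, while 'offload'/'retain' are explicit user choices.
VALID_BEHAVIORS = ("auto", "offload", "retain")

def resolve_stop_action(requested="auto", conf_shelve_on_stop=False):
    """Map the user's requested behavior plus operator config to an action.

    The config default is retain, matching the backward-compatible
    behavior in the proposal.
    """
    if requested not in VALID_BEHAVIORS:
        raise ValueError("unknown stop behavior: %s" % requested)
    if requested == "auto":
        return "offload" if conf_shelve_on_stop else "retain"
    return requested

# A user that doesn't care gets the cloud's default; one that does can pin it.
assert resolve_stop_action() == "retain"
assert resolve_stop_action("auto", conf_shelve_on_stop=True) == "offload"
assert resolve_stop_action("retain", conf_shelve_on_stop=True) == "retain"
```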
It is possible that a user that stopped and shelved their server could hit a NoValidHost when starting (unshelving) the server, but that really shouldn't happen in a cloud that's configuring nova to shelve by default because if they are doing this, their SLA needs to reflect they have the capacity to unshelve the server. If you can't honor that SLA, don't shelve by default. So, what are the general feelings on this before I go off and start writing up a spec? [1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681 [2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679 [3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop -- Thanks, Matt From rico.lin.guanyu at gmail.com Sat Sep 15 00:23:36 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 14 Sep 2018 18:23:36 -0600 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: > > > For the candidates who are running for tc seats, please reply to this > email to indicate if you are open to use certain social media app in > certain region (like Wechat in China, Line in Japan, etc.), in order to > reach out to the OpenStack developers in that region and help them to > connect to the upstream community as well as answering questions or other > activities that will help. (sorry for the long sentence ... ) > We definitely need to reach out to developers in every region globally, and find a way to bring the technical community somewhere closer to those developers without creating too much burden for everyone.
A list of channels is not hard to setup, but it will bring big different IMO and we can always adjust what channel we have. What we can limit here is make sure always help the new joiner to find the right place to engage. Once we got connected to local developers and community, it's easier for TC to guide all IMO. Will this work? Not sure! So why not we try and find out!:) > > Rico and I already sign up for Wechat communication for sure :) > Good to have you! Let's do it!! BTW nice dicsussion today, thanks all who is there in TC room to share. -------------- next part -------------- An HTML attachment was scrubbed... URL: From flux.adam at gmail.com Sat Sep 15 00:26:03 2018 From: flux.adam at gmail.com (Adam Harwell) Date: Fri, 14 Sep 2018 18:26:03 -0600 Subject: [openstack-dev] [octavia] Optimize the query of the octavia database In-Reply-To: References: <423483AB-0159-4C01-9CC5-A61AB24A4341@blizzard.com> Message-ID: It's high priority for me as well, so we should be able to get something done very soon, I think. Look for something early next week maybe? Thanks, --Adam On Thu, Sep 13, 2018, 21:18 Jeff Yang wrote: > Thanks: > I found the correlative patch in neutron-lbaas: > https://review.openstack.org/#/c/568361/ > > The bug was marked high level by our QA team. I need to fix it as soon > as possible. > Does Michael Johnson have any good suggestion? I am willing to > complete the > repair work of this bug. If your patch still takes a while to prepare. > > Michael Johnson 于2018年9月14日周五 上午7:56写道: > >> This is a known regression in the Octavia API performance. It has an >> existing story[0] that is under development. You are correct, that >> star join is the root of the problem. >> Look for a patch soon. >> >> [0] https://storyboard.openstack.org/#!/story/2002933 >> >> Michael >> On Thu, Sep 13, 2018 at 10:32 AM Erik Olof Gunnar Andersson >> wrote: >> > >> > This was solved in neutron-lbaas recently, maybe we could adopt the >> same method for Octavia? 
>> > >> > Sent from my iPhone >> > >> > On Sep 13, 2018, at 4:54 AM, Jeff Yang wrote: >> > >> > Hi, All >> > >> > As octavia resources increase, I found that running the "openstack >> loadbalancer list" command takes longer and longer. Sometimes a 504 error >> is reported. >> > >> > By reading the code, I found that octavia will performs complex left >> outer join queries when acquiring resources such as loadbalancer, listener, >> pool, etc. in order to only make one trip to the database. >> > Reference code: http://paste.openstack.org/show/730022 Line 133 >> > Generated SQL statements: http://paste.openstack.org/show/730021 >> > >> > So, I suggest that adjust the query strategy to provide different join >> queries for different resources. >> > >> > https://storyboard.openstack.org/#!/story/2003751 >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Sat Sep 15 00:51:40 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 14 Sep 2018 18:51:40 -0600 Subject: [openstack-dev] [tc][uc]Community Wide Long Term Goals Message-ID: Hi, Based upon the discussion we had at the TC session in the afternoon, I'm starting to draft a patch to add a long-term goal mechanism to governance. It is by no means a complete solution at the moment (I still have not thought through the execution method needed to ensure the outcome), but feel free to provide your feedback at https://review.openstack.org/#/c/602799/ . -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From yjf1970231893 at gmail.com Sat Sep 15 01:01:11 2018 From: yjf1970231893 at gmail.com (Jeff Yang) Date: Sat, 15 Sep 2018 09:01:11 +0800 Subject: [openstack-dev] [octavia] Optimize the query of the octavia database In-Reply-To: References: <423483AB-0159-4C01-9CC5-A61AB24A4341@blizzard.com> Message-ID: Ok, Thank you very much for your work. Adam Harwell wrote on Sat, Sep 15, 2018 at 8:26 AM: > It's high priority for me as well, so we should be able to get something > done very soon, I think. Look for something early next week maybe? > > Thanks, > --Adam > > On Thu, Sep 13, 2018, 21:18 Jeff Yang wrote: > >> Thanks: >> I found the correlative patch in neutron-lbaas: >> https://review.openstack.org/#/c/568361/ >> >> The bug was marked high level by our QA team.
I need to fix it as >> soon as possible. >> Does Michael Johnson have any good suggestion? I am willing to >> complete the >> repair work of this bug. If your patch still takes a while to >> prepare. >> >> Michael Johnson 于2018年9月14日周五 上午7:56写道: >> >>> This is a known regression in the Octavia API performance. It has an >>> existing story[0] that is under development. You are correct, that >>> star join is the root of the problem. >>> Look for a patch soon. >>> >>> [0] https://storyboard.openstack.org/#!/story/2002933 >>> >>> Michael >>> On Thu, Sep 13, 2018 at 10:32 AM Erik Olof Gunnar Andersson >>> wrote: >>> > >>> > This was solved in neutron-lbaas recently, maybe we could adopt the >>> same method for Octavia? >>> > >>> > Sent from my iPhone >>> > >>> > On Sep 13, 2018, at 4:54 AM, Jeff Yang >>> wrote: >>> > >>> > Hi, All >>> > >>> > As octavia resources increase, I found that running the "openstack >>> loadbalancer list" command takes longer and longer. Sometimes a 504 error >>> is reported. >>> > >>> > By reading the code, I found that octavia will performs complex left >>> outer join queries when acquiring resources such as loadbalancer, listener, >>> pool, etc. in order to only make one trip to the database. >>> > Reference code: http://paste.openstack.org/show/730022 Line 133 >>> > Generated SQL statements: http://paste.openstack.org/show/730021 >>> > >>> > So, I suggest that adjust the query strategy to provide different join >>> queries for different resources. 
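The row multiplication behind that suggestion is easy to see with a toy example. A sqlite3 sketch (hypothetical three-table schema, not Octavia's actual models) comparing the single star join against separate per-resource queries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lb (id INTEGER PRIMARY KEY);
CREATE TABLE listener (id INTEGER PRIMARY KEY, lb_id INTEGER);
CREATE TABLE pool (id INTEGER PRIMARY KEY, lb_id INTEGER);
""")
conn.execute("INSERT INTO lb VALUES (1)")
conn.executemany("INSERT INTO listener VALUES (?, 1)", [(i,) for i in range(10)])
conn.executemany("INSERT INTO pool VALUES (?, 1)", [(i,) for i in range(10)])

# One star join: sibling collections multiply each other
# (10 listeners x 10 pools = 100 rows for a single load balancer).
star = conn.execute("""
    SELECT lb.id, listener.id, pool.id FROM lb
    LEFT OUTER JOIN listener ON listener.lb_id = lb.id
    LEFT OUTER JOIN pool ON pool.lb_id = lb.id
""").fetchall()

# Separate queries per resource: 1 + 10 + 10 = 21 rows total.
separate = (conn.execute("SELECT id FROM lb").fetchall()
            + conn.execute("SELECT id FROM listener").fetchall()
            + conn.execute("SELECT id FROM pool").fetchall())

print(len(star), len(separate))  # prints: 100 21
```

With more sibling collections the star join's row count grows multiplicatively, which is why per-resource queries scale better here.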
>>> > >>> > https://storyboard.openstack.org/#!/story/2003751 >>> > >>> > >>> __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> > >>> __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Sat Sep 15 03:16:27 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 14 Sep 2018 21:16:27 -0600 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: Message-ID: Ok - yeah, I'm not sure what the history behind that is either... 
I'm mainly curious if that's something we can/should keep or if we are opposed to dropping 'os' and 'api' from the convention (e.g. load-balancer:loadbalancer:post as opposed to os_load-balancer_api:loadbalancer:post) and just sticking with the service-type? On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson wrote: > I don't know for sure, but I assume it is short for "OpenStack" and > prefixing OpenStack policies vs. third party plugin policies for > documentation purposes. > > I am guilty of borrowing this from existing code examples[0]. > > [0] > http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html > > Michael > On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad > wrote: > > > > > > > > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson > wrote: > >> > >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" > >> which maps to the "os--api::" format. > > > > > > Thanks for explaining the justification, Michael. > > > > I'm curious if anyone has context on the "os-" part of the format? I've > seen that pattern in a couple different projects. Does anyone know about > its origin? Was it something we converted to our policy names because of > API names/paths? > > > >> > >> > >> I selected it as it uses the service-type[1], references the API > >> resource, and then the method. So it maps well to the API reference[2] > >> for the service. 
> >> > >> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html > >> [1] https://service-types.openstack.org/ > >> [2] > https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer > >> > >> Michael > >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: > >> > > >> > So +1 > >> > > >> > > >> > > >> > Tim > >> > > >> > > >> > > >> > From: Lance Bragstad > >> > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > >> > Date: Wednesday, 12 September 2018 at 20:43 > >> > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org>, OpenStack Operators < > openstack-operators at lists.openstack.org> > >> > Subject: [openstack-dev] [all] Consistent policy names > >> > > >> > > >> > > >> > The topic of having consistent policy names has popped up a few times > this week. Ultimately, if we are to move forward with this, we'll need a > convention. To help with that a little bit I started an etherpad [0] that > includes links to policy references, basic conventions *within* that > service, and some examples of each. I got through quite a few projects this > morning, but there are still a couple left. > >> > > >> > > >> > > >> > The idea is to look at what we do today and see what conventions we > can come up with to move towards, which should also help us determine how > much each convention is going to impact services (e.g. picking a convention > that will cause 70% of services to rename policies). > >> > > >> > > >> > > >> > Please have a look and we can discuss conventions in this thread. If > we come to agreement, I'll start working on some documentation in > oslo.policy so that it's somewhat official because starting to renaming > policies. 
> >> > > >> > > >> > > >> > [0] https://etherpad.openstack.org/p/consistent-policy-names > >> > > >> > _______________________________________________ > >> > OpenStack-operators mailing list > >> > OpenStack-operators at lists.openstack.org > >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From samuel at cassi.ba Sat Sep 15 06:06:41 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Fri, 14 Sep 2018 23:06:41 -0700 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: On Fri, Sep 14, 2018 at 5:25 PM Rico Lin wrote: >> >> >> For the candidates who are running for tc seats, please reply to this email to indicate if you are open to use certain social media app in certain region (like Wechat in China, Line in Japan, etc.), in order to reach out to the OpenStack developers in that region and help them to connect to the upstream community as well as answering questions or other activities that will help. (sorry for the long sentence ... ) > > > We definitely need to reach to developers from each location in global. And a way to expose technical community to some place more close to developer and not creating to much burden to all. 
For me, if we can have channels for broadcast our key information cross entire community (like what's next TC/PTL election, what mission is been proposed, who people can talk to when certain issue happens, who you can talk to when you got great idea, and most importantly where are the right place you should go to) expose to all and maybe encourge community leaders to join. A list of channels is not hard to setup, but it will bring big different IMO and we can always adjust what channel we have. What we can limit here is make sure always help the new joiner to find the right place to engage. > > Once we got connected to local developers and community, it's easier for TC to guide all IMO. Will this work? Not sure! So why not we try and find out!:) >> >> >> >> Rico and I already sign up for Wechat communication for sure :) > > Good to have you! Let's do it!! > > BTW nice dicsussion today, thanks all who is there in TC room to share. > I idle on the unofficial Slack group, which has sporadic activity from those looking to either connect with the community or find some kind of support or help. Despite an autoresponder telling people to go elsewhere, yet more people still sign up and ask questions. I'm not saying one needs to establish beachheads on all the outlets, but perhaps the message to get people in the right place should be better refined. As it sits, the autoresponse on Slack seems like the cheerful message from the Magratheans right before the warheads are dispatched. I'm not sure how often that results in a solid conversion without devoted community ambassadors watching these outlets, but it doesn't look very inviting from just scrolling through the default channel history. I see merit in doing more than having an autoresponder, but I've also seen first-hand what happens when otherwise diverse communities enter into a freemium contract. 
The net result is that people communicate less and less for various reasons, ending in the inverse of the desired effect of being more connected. Best, Samuel From skaplons at redhat.com Sat Sep 15 11:42:21 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Sat, 15 Sep 2018 05:42:21 -0600 Subject: [openstack-dev] [neutron] Pep8 job failures In-Reply-To: <23EDBF8F-6221-43E4-9320-74D070A4F97E@redhat.com> References: <23EDBF8F-6221-43E4-9320-74D070A4F97E@redhat.com> Message-ID: <07F03082-6D79-4A24-9B0E-696C8D2D99DF@redhat.com> Hi, As patch [1] is finally merged, you can now rebase your Neutron patches and the pep8 job should pass. [1] https://review.openstack.org/#/c/589382/ > Message written by Slawomir Kaplonski on 07.09.2018 at 01:30: > > Hi, > > Recently bump of eventlet to 0.24.0 was merged in requirements repo [1]. > That caused issue in Neutron and pep8 job is now failing. See [2]. So if You have pep8 failed in Your patch with error like in [3] please don’t recheck job - it will not help :) > Patch to fix that is already proposed [4]. When it will be merged, please rebase Your patch and this issue should be solved.
> > [1] https://review.openstack.org/#/c/589382/ > [2] https://bugs.launchpad.net/neutron/+bug/1791178 > [3] http://logs.openstack.org/37/382037/73/gate/openstack-tox-pep8/7f200e6/job-output.txt.gz#_2018-09-06_17_48_34_700485 > [4] https://review.openstack.org/600565 > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From Tim.Bell at cern.ch Sat Sep 15 12:38:07 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Sat, 15 Sep 2018 12:38:07 +0000 Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend In-Reply-To: <80609709-7b11-f920-5a2b-2b980e936cf3@gmail.com> References: <80609709-7b11-f920-5a2b-2b980e936cf3@gmail.com> Message-ID: <01331699-F5B4-44AF-91CF-95416A44910B@cern.ch> One extra user motivation that came up during past forums was to have a different quota for shelved instances (or remove them from the project quota altogether). Currently, I believe that a shelved instance still counts towards the instances/cores quota, so the reduction of usage by the user is not reflected in the quotas. One discussion at the time was that the user is still reserving IPs, so it is not zero resource usage, and the instances still occupy storage. (We disabled shelving for other reasons, so I'm not able to check easily.) Tim -----Original Message----- From: Matt Riedemann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Saturday, 15 September 2018 at 01:27 To: "OpenStack Development Mailing List (not for usage questions)" , "openstack-operators at lists.openstack.org" , "openstack-sigs at lists.openstack.org" Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend tl;dr: I'm proposing a new parameter to the server stop (and suspend?) APIs to control if nova shelve offloads the server. Long form: This came up during the public cloud WG session this week based on a couple of feature requests [1][2].
When a user stops/suspends a server, the hypervisor frees up resources on the host but nova continues to track those resources as being used on the host so the scheduler can't put more servers there. What operators would like to do is that when a user stops a server, nova actually shelve offloads the server from the host so they can schedule new servers on that host. On start/resume of the server, nova would find a new host for the server. This also came up in Vancouver where operators would like to free up limited expensive resources like GPUs when the server is stopped. This is also the behavior in AWS. The problem with shelve is that it's great for operators but users just don't use it, maybe because they don't know what it is and stop works just fine. So how do you get users to opt into shelving their server? I've proposed a high-level blueprint [3] where we'd add a new (microversioned) parameter to the stop API with three options: * auto * offload * retain Naming is obviously up for debate. The point is we would default to auto and if auto is used, the API checks a config option to determine the behavior - offload or retain. By default we would retain for backward compatibility. For users that don't care, they get auto and it's fine. For users that do care, they either (1) don't opt into the microversion or (2) specify the specific behavior they want. I don't think we need to expose what the cloud's configuration for auto is because again, if you don't care then it doesn't matter and if you do care, you can opt out of this. "How do we get users to use the new microversion?" I'm glad you asked. Well, nova CLI defaults to using the latest available microversion negotiated between the client and the server, so by default, anyone using "nova stop" would get the 'auto' behavior (assuming the client and server are new enough to support it). Long-term, openstack client plans on doing the same version negotiation. 
As for the server status changes, if the server is stopped and shelved, the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I believe this is fine especially if a user is not being specific and doesn't care about the actual backend behavior. On start, the API would allow starting (unshelving) shelved offloaded (rather than just stopped) instances. Trying to hide shelved servers as stopped in the API would be overly complex IMO so I don't want to try and mask that. It is possible that a user that stopped and shelved their server could hit a NoValidHost when starting (unshelving) the server, but that really shouldn't happen in a cloud that's configuring nova to shelve by default because if they are doing this, their SLA needs to reflect they have the capacity to unshelve the server. If you can't honor that SLA, don't shelve by default. So, what are the general feelings on this before I go off and start writing up a spec? [1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681 [2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679 [3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop -- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jungleboyj at gmail.com Sat Sep 15 13:52:05 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Sat, 15 Sep 2018 08:52:05 -0500 Subject: [openstack-dev] [cinder][ptg] Team Photos Posted ... Message-ID: <28be461a-b7bc-79c5-79a2-fa1cafb0ef41@gmail.com> Team, Wanted to share the team photos from the PTG.  
You can get them here: https://www.dropbox.com/sh/2pmvfkstudih2wf/AADynEnPDJiWIOE2nwjzBgtla/Cinder?dl=0&subfolder_nav_tracking=1 Jay From Tim.Bell at cern.ch Sat Sep 15 14:51:26 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Sat, 15 Sep 2018 14:51:26 +0000 Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend In-Reply-To: <01331699-F5B4-44AF-91CF-95416A44910B@cern.ch> References: <80609709-7b11-f920-5a2b-2b980e936cf3@gmail.com> <01331699-F5B4-44AF-91CF-95416A44910B@cern.ch> Message-ID: <5D0C9FC3-38EF-4F8E-B6F0-7B3B7DD508C0@cern.ch> Found the previous discussion at http://lists.openstack.org/pipermail/openstack-operators/2016-August/011321.html from 2016. Tim -----Original Message----- From: Tim Bell Date: Saturday, 15 September 2018 at 14:38 To: "OpenStack Development Mailing List (not for usage questions)" , "openstack-operators at lists.openstack.org" , "openstack-sigs at lists.openstack.org" Subject: Re: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend One extra user motivation that came up during past forums was to have a different quota for shelved instances (or remove them from the project quota all together). Currently, I believe that a shelved instance still counts towards the instances/cores quota thus the reduction of usage by the user is not reflected in the quotas. One discussion at the time was that the user is still reserving IPs so it is not zero resource usage and the instances still occupy storage. 
(We disabled shelving for other reasons so I'm not able to check easily) Tim -----Original Message----- From: Matt Riedemann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Saturday, 15 September 2018 at 01:27 To: "OpenStack Development Mailing List (not for usage questions)" , "openstack-operators at lists.openstack.org" , "openstack-sigs at lists.openstack.org" Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend tl;dr: I'm proposing a new parameter to the server stop (and suspend?) APIs to control if nova shelve offloads the server. Long form: This came up during the public cloud WG session this week based on a couple of feature requests [1][2]. When a user stops/suspends a server, the hypervisor frees up resources on the host but nova continues to track those resources as being used on the host so the scheduler can't put more servers there. What operators would like to do is that when a user stops a server, nova actually shelve offloads the server from the host so they can schedule new servers on that host. On start/resume of the server, nova would find a new host for the server. This also came up in Vancouver where operators would like to free up limited expensive resources like GPUs when the server is stopped. This is also the behavior in AWS. The problem with shelve is that it's great for operators but users just don't use it, maybe because they don't know what it is and stop works just fine. So how do you get users to opt into shelving their server? I've proposed a high-level blueprint [3] where we'd add a new (microversioned) parameter to the stop API with three options: * auto * offload * retain Naming is obviously up for debate. The point is we would default to auto and if auto is used, the API checks a config option to determine the behavior - offload or retain. By default we would retain for backward compatibility. For users that don't care, they get auto and it's fine. 
For users that do care, they either (1) don't opt into the microversion or (2) specify the specific behavior they want. I don't think we need to expose what the cloud's configuration for auto is because again, if you don't care then it doesn't matter and if you do care, you can opt out of this. "How do we get users to use the new microversion?" I'm glad you asked. Well, nova CLI defaults to using the latest available microversion negotiated between the client and the server, so by default, anyone using "nova stop" would get the 'auto' behavior (assuming the client and server are new enough to support it). Long-term, openstack client plans on doing the same version negotiation. As for the server status changes, if the server is stopped and shelved, the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I believe this is fine especially if a user is not being specific and doesn't care about the actual backend behavior. On start, the API would allow starting (unshelving) shelved offloaded (rather than just stopped) instances. Trying to hide shelved servers as stopped in the API would be overly complex IMO so I don't want to try and mask that. It is possible that a user that stopped and shelved their server could hit a NoValidHost when starting (unshelving) the server, but that really shouldn't happen in a cloud that's configuring nova to shelve by default because if they are doing this, their SLA needs to reflect they have the capacity to unshelve the server. If you can't honor that SLA, don't shelve by default. So, what are the general feelings on this before I go off and start writing up a spec? 
[1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681 [2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679 [3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop -- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From morgan.fainberg at gmail.com Sat Sep 15 15:01:10 2018 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Sat, 15 Sep 2018 08:01:10 -0700 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: Message-ID: I am generally opposed to needlessly prefixing things with "os". I would advocate to drop it. On Fri, Sep 14, 2018, 20:17 Lance Bragstad wrote: > Ok - yeah, I'm not sure what the history behind that is either... > > I'm mainly curious if that's something we can/should keep or if we are > opposed to dropping 'os' and 'api' from the convention (e.g. > load-balancer:loadbalancer:post as opposed to > os_load-balancer_api:loadbalancer:post) and just sticking with the > service-type? > > On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson > wrote: > >> I don't know for sure, but I assume it is short for "OpenStack" and >> prefixing OpenStack policies vs. third party plugin policies for >> documentation purposes. >> >> I am guilty of borrowing this from existing code examples[0]. >> >> [0] >> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html >> >> Michael >> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad >> wrote: >> > >> > >> > >> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson >> wrote: >> >> >> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" >> >> which maps to the "os--api::" format. >> > >> > >> > Thanks for explaining the justification, Michael. 
>> > >> > I'm curious if anyone has context on the "os-" part of the format? I've >> seen that pattern in a couple different projects. Does anyone know about >> its origin? Was it something we converted to our policy names because of >> API names/paths? >> > >> >> >> >> >> >> I selected it as it uses the service-type[1], references the API >> >> resource, and then the method. So it maps well to the API reference[2] >> >> for the service. >> >> >> >> [0] >> https://docs.openstack.org/octavia/latest/configuration/policy.html >> >> [1] https://service-types.openstack.org/ >> >> [2] >> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer >> >> >> >> Michael >> >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: >> >> > >> >> > So +1 >> >> > >> >> > >> >> > >> >> > Tim >> >> > >> >> > >> >> > >> >> > From: Lance Bragstad >> >> > Reply-To: "OpenStack Development Mailing List (not for usage >> questions)" >> >> > Date: Wednesday, 12 September 2018 at 20:43 >> >> > To: "OpenStack Development Mailing List (not for usage questions)" < >> openstack-dev at lists.openstack.org>, OpenStack Operators < >> openstack-operators at lists.openstack.org> >> >> > Subject: [openstack-dev] [all] Consistent policy names >> >> > >> >> > >> >> > >> >> > The topic of having consistent policy names has popped up a few >> times this week. Ultimately, if we are to move forward with this, we'll >> need a convention. To help with that a little bit I started an etherpad [0] >> that includes links to policy references, basic conventions *within* that >> service, and some examples of each. I got through quite a few projects this >> morning, but there are still a couple left. >> >> > >> >> > >> >> > >> >> > The idea is to look at what we do today and see what conventions we >> can come up with to move towards, which should also help us determine how >> much each convention is going to impact services (e.g. 
picking a convention >> that will cause 70% of services to rename policies). >> > >> > >> > >> > Please have a look and we can discuss conventions in this thread. If >> we come to agreement, I'll start working on some documentation in >> oslo.policy so that it's somewhat official before starting to rename >> policies. >> > >> > >> > >> > [0] https://etherpad.openstack.org/p/consistent-policy-names >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Sat Sep 15 15:30:55 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 15 Sep 2018 09:30:55 -0600 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-30 update Message-ID: <3205ef53-9021-ea3d-10cd-ab27f885a219@gmail.com> Just a couple of updates this week. * I have assigned PTLs (for projects that have PTLs [1]) to their respective tasks in StoryBoard [2].
If someone else on your team is planning on working on the pre-upgrade check goal then please just reassign ownership of the task. * I have started going through some project release notes looking for upgrade impacts and leaving notes in the task assigned per project. There were some questions at the PTG about what some projects could add for pre-upgrade checks so check your task to see if I've left any thoughts. I have not gone through all projects yet. * Ben Nemec has extracted the common upgrade check CLI framework into a library [3] (thanks Ben!) and is working on getting that imported into Gerrit. It would be great if projects that start working on the goal can try using that library and provide feedback. [1] https://governance.openstack.org/election/results/stein/ptl.html [2] https://storyboard.openstack.org/#!/story/2003657 [3] https://github.com/cybertron/oslo.upgradecheck -- Thanks, Matt From mriedemos at gmail.com Sat Sep 15 15:52:28 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 15 Sep 2018 09:52:28 -0600 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: On 9/14/2018 1:52 PM, Zhipeng Huang wrote: > This is a joint question from mnaser and me :) > > For the candidates who are running for tc seats, please reply to this > email to indicate if you are open to use certain social media app in > certain region (like Wechat in China, Line in Japan, etc.), in order to > reach out to the OpenStack developers in that region and help them to > connect to the upstream community as well as answering questions or > other activities that will help. (sorry for the long sentence ... 
) > > Rico and I already sign up for Wechat communication for sure :) Having had some experience with WeChat, I can't imagine I'd be very useful in a nova channel in WeChat since the majority of people in that group wouldn't be speaking English so I wouldn't be of much help, unless someone directly asked me a question in English. I realize the double standard here with expecting non-native English speakers to show up in the #openstack-nova freenode IRC channel to ask questions. It's definitely a hard problem when people simply can't speak the same language and I don't have a great solution. Probably the best common solution we have is having more people across time zones and language barriers engaging in more discussion in the mailing list (and Gerrit reviews of course). So maybe that means if you're in WeChat and someone is blocked or has a bigger question for a specific project team, encourage them to send an email to the dev ML - but that requires ambassadors to be in WeChat channels to make that suggestion. I think of this like working with product teams within your own company. Lots of those people aren't active upstream contributors and to avoid being the middleman (and thus bottleneck) for all communication between upstream and downstream teams, I've encouraged the downstream folk to send an email upstream to start a discussion. -- Thanks, Matt From emilien at redhat.com Sat Sep 15 17:42:37 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sat, 15 Sep 2018 11:42:37 -0600 Subject: [openstack-dev] [puppet] [placement] Message-ID: I'm currently taking care of creating puppet-placement: https://review.openstack.org/#/c/602870/ https://review.openstack.org/#/c/602871/ https://review.openstack.org/#/c/602869/ Once these merge, we'll use cookiecutter, and move things from puppet-nova. We'll also find a way to call puppet-placement from nova::placement class, eventually. Hopefully we can make the switch to new placement during Stein! 
Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Sat Sep 15 18:21:06 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sat, 15 Sep 2018 12:21:06 -0600 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: Ya I think the whole point here (the question per se) is just to gauge if TC candidates are willing to engage with regional developers in a way that is best fitting for that region. It surely will take other measures to make this entire effort work. On that I totally agree with you that there should be ambassadors to help facilitate the discussion, and the end goal is always to go to upstream instead of the convo ending in local or downstream. On Sat, Sep 15, 2018, 9:52 AM Matt Riedemann wrote: > On 9/14/2018 1:52 PM, Zhipeng Huang wrote: > > This is a joint question from mnaser and me :) > > > > For the candidates who are running for tc seats, please reply to this > > email to indicate if you are open to use certain social media app in > > certain region (like Wechat in China, Line in Japan, etc.), in order to > > reach out to the OpenStack developers in that region and help them to > > connect to the upstream community as well as answering questions or > > other activities that will help. (sorry for the long sentence ... ) > > > > Rico and I already sign up for Wechat communication for sure :) > > Having had some experience with WeChat, I can't imagine I'd be very > useful in a nova channel in WeChat since the majority of people in that > group wouldn't be speaking English so I wouldn't be of much help, unless > someone directly asked me a question in English. I realize the double > standard here with expecting non-native English speakers to show up in > the #openstack-nova freenode IRC channel to ask questions.
It's > definitely a hard problem when people simply can't speak the same > language and I don't have a great solution. Probably the best common > solution we have is having more people across time zones and language > barriers engaging in more discussion in the mailing list (and Gerrit > reviews of course). So maybe that means if you're in WeChat and someone > is blocked or has a bigger question for a specific project team, > encourage them to send an email to the dev ML - but that requires > ambassadors to be in WeChat channels to make that suggestion. I think of > this like working with product teams within your own company. Lots of > those people aren't active upstream contributors and to avoid being the > middleman (and thus bottleneck) for all communication between upstream > and downstream teams, I've encouraged the downstream folk to send an > email upstream to start a discussion. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongle.li at gmail.com Sun Sep 16 02:50:00 2018 From: yongle.li at gmail.com (Fred Li) Date: Sun, 16 Sep 2018 10:50:00 +0800 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: As a non-native English speaker, it is nice to have some TC or BoD members who stay in the local social media, like a WeChat group in China. But it is also very difficult for non-native Chinese speakers to find useful information in a ton of Chinese chats. My thoughts (even though I am not a TC candidate) on this are: 1. it is kind of you to stay in the local group. 2. if we know that you are in, we will use English if we want you to notice. 3.
since there is local OpenStack operation manager, hope he/she can identify some information and help to translate, or remind them to translate. My one cent. On Sun, Sep 16, 2018 at 2:21 AM, Zhipeng Huang wrote: > Ya I think the whole point here (the question per se )is just to gauge if > TC Candidates are willing to engage with regional developer in a way that > is best fitting for that region. > > It surly will take other measures to make this entire effort work . On > that I totally agree with you that there should be ambassadors to help > facilitate the discussion, and the end goal is always to go to upstream > instead of the convo ending in local or downstream. > > > On Sat, Sep 15, 2018, 9:52 AM Matt Riedemann wrote: > >> On 9/14/2018 1:52 PM, Zhipeng Huang wrote: >> > This is a joint question from mnaser and me :) >> > >> > For the candidates who are running for tc seats, please reply to this >> > email to indicate if you are open to use certain social media app in >> > certain region (like Wechat in China, Line in Japan, etc.), in order to >> > reach out to the OpenStack developers in that region and help them to >> > connect to the upstream community as well as answering questions or >> > other activities that will help. (sorry for the long sentence ... ) >> > >> > Rico and I already sign up for Wechat communication for sure :) >> >> Having had some experience with WeChat, I can't imagine I'd be very >> useful in a nova channel in WeChat since the majority of people in that >> group wouldn't be speaking English so I wouldn't be of much help, unless >> someone directly asked me a question in English. I realize the double >> standard here with expecting non-native English speakers to show up in >> the #openstack-nova freenode IRC channel to ask questions. It's >> definitely a hard problem when people simply can't speak the same >> language and I don't have a great solution. 
Probably the best common >> solution we have is having more people across time zones and language >> barriers engaging in more discussion in the mailing list (and Gerrit >> reviews of course). So maybe that means if you're in WeChat and someone >> is blocked or has a bigger question for a specific project team, >> encourage them to send an email to the dev ML - but that requires >> ambassadors to be in WeChat channels to make that suggestion. I think of >> this like working with product teams within your own company. Lots of >> those people aren't active upstream contributors and to avoid being the >> middleman (and thus bottleneck) for all communication between upstream >> and downstream teams, I've encouraged the downstream folk to send an >> email upstream to start a discussion. >> >> -- >> >> Thanks, >> >> Matt >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards Fred Li (李永乐) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eumel at arcor.de Sun Sep 16 09:01:26 2018 From: eumel at arcor.de (Frank Kloeker) Date: Sun, 16 Sep 2018 11:01:26 +0200 Subject: [openstack-dev] [I18n] Stein translation plans and prio Message-ID: Hello, the PTG is just over and now we are at the beginning of the work for Stein cycle. First of all let me thank all supporters for the I18n team and all individuals who help to present OpenStack in different languages. 
We have made so much progress since the formation of the team. What are the workflows now for the Stein cycle: 1. Project documentation translation We have now the right Zuul jobs in the flow to pull and push documentation from project repos to Zanata and back. At the moment there are 3 early-bird projects on the list: Horizon, OpenStack Ansible, and OpenStack Helm. Expanding this list will be the main work in the cycle. 2. Whitepaper translation Early this year we started with translation of Edge Computing Whitepaper. After solving issues with publishing translated content we have here now also a flow from the origin document through translation platform and back. Container whitepaper is the second project in this space. 3. Stein Dashboard translation All translation strings from stable/rocky branch are merged back to master and we work ahead on the master branch. There are 22 projects on that list. All plans and prios are linked on the frontpage of translation platform https://translate.openstack.org Keep the stones rolling on the Stein cycle. kind regards Frank (PTL I18n) From jean-philippe at evrard.me Sun Sep 16 12:14:41 2018 From: jean-philippe at evrard.me (Jean-philippe Evrard) Date: Sun, 16 Sep 2018 14:14:41 +0200 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: <20180914204756.o5umojwxvypskwti@yuggoth.org> References: <20180914204756.o5umojwxvypskwti@yuggoth.org> Message-ID: > [...] > > I respect that tool choices can make a difference in enabling or > improving our outreach to specific cultures. I agree there. > I'll commit to > personally rejecting presence on proprietary social media services > so as to demonstrate that public work can be done within our > community while relying exclusively on free/libre open source > software. It is nice to be in a position to do so. Please don't change! 
: ) > I recognize the existence of the free software movement as > a distinct culture with whom we could do a better job of connecting. > If as a community we promote and embrace non-free tools we will only > continue to alienate them, [...] I agree again with Jeremy here. As this was a direct question for the candidates, here is my answer... There are two layers in this conversation: a personal level, and an official stance on the subject (as discussed in the TC room). At a personal level, I guess I wouldn't mind joining WeChat myself, with the hope of being helpful there. As I don't speak this language particularly well, I am not sure how I can be more of a help there than I can be speaking with an ambassador in a mutually common language (I am also not a native English speaker). At the same time, I would be very sad to not use an open tool, because I am not sure what the privacy implications would be. But, pragmatically, I understand the bigger picture here: we want to be more reachable, as increasing community size over time is a must for sustainable software, and if I can be of a little help personally, I'd do it. Before giving my opinion for an official stance as a TC candidate (the other layer), I'd like to ask you a few questions ... - What is the problem joining WeChat will solve (keeping in mind the language barrier)? - Isn't this problem already solved for other languages with existing initiatives like local ambassadors and the i18n team? Why aren't these relevant? - Should we widen this 'WeChat' initiative to all systems? - Pardon my ignorance here, what is the problem with email? (I understand some chat systems might be blocked; I thought emails would be fine, and the lowest common denominator). I also have technical questions about 'WeChat' (like how do you use it without a smartphone?)
and the relevance of tools we currently use, but this will open Pandora's box, and I'd rather not spend my energy on closing that box right now :D Best regards, Jean-Philippe Evrard (evrardjp) From zhipengh512 at gmail.com Sun Sep 16 14:28:13 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sun, 16 Sep 2018 07:28:13 -0700 Subject: [openstack-dev] [tc][uc]Community Wide Long Term Goals In-Reply-To: References: Message-ID: Just a quick update, the execution part of the proposal has been added in patch-2 , so if you have the similar concern shared in Matt's open letter , please help review and comment. On Fri, Sep 14, 2018, 5:51 PM Zhipeng Huang wrote: > Hi, > > Based upon the discussion we had at the TC session in the afternoon, I'm > starting to draft a patch to add long term goal mechanism into governance. > It is by no means a complete solution at the moment (still have not thought > through the execution method yet to make sure the outcome), but feel free > to provide your feedback at https://review.openstack.org/#/c/602799/ . > > -- > Zhipeng (Howard) Huang > > Standard Engineer > IT Standard & Patent/IT Product Line > Huawei Technologies Co,. Ltd > Email: huangzhipeng at huawei.com > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > Research Assistant > Mobile Ad-Hoc Network Lab, Calit2 > University of California, Irvine > Email: zhipengh at uci.edu > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Sun Sep 16 15:33:47 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 16 Sep 2018 10:33:47 -0500 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: On 9/15/2018 9:50 PM, Fred Li wrote: > As a non-native English speaker, it is nice-to-have that some TC or BoD > can stay in the local social media, like wechat group in China. But it > is also very difficult for non-native Chinese speakers to stay find > useful information in ton of Chinese chats. > My thoughts (even I am not a TC candidate) on this is, > 1. it is kind of you to stay in the local group. > 2. if we know that you are in, we will say English if we want you to notice. > 3. since there is local OpenStack operation manager, hope he/she can > identify some information and help to translate, or remind them to > translate. > > My one cent. Is there a generic openstack group on wechat? Does one have to be invited to it? Is there a specific openstack/nova group on wechat? I'm on wechat anyway so I don't mind being in those groups if someone wants to reach out. -- Thanks, Matt From soulxu at gmail.com Sun Sep 16 20:57:26 2018 From: soulxu at gmail.com (Alex Xu) Date: Mon, 17 Sep 2018 04:57:26 +0800 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: I'm happy to be the translator or forwarder for nova issues if you guys need it (although the nova team isn't happy with me now, I see it is not personal; I guess it won't make the other work I do harder). I can see there are a lot of Chinese operators/users who complain about some issues, but they never send their feedback to the mailing list; this may be due to the language, or people not knowing the open-source culture in China. (To be honest, OpenStack is the first project that let a lot of developers understand what open source is, and how it works.
Before that, since the Linux kernel is hard, really only a few people in China had experienced open source). Matt Riedemann wrote on Sun, Sep 16, 2018 at 11:34 PM: > On 9/15/2018 9:50 PM, Fred Li wrote: > > As a non-native English speaker, it is nice-to-have that some TC or BoD > > can stay in the local social media, like wechat group in China. But it > > is also very difficult for non-native Chinese speakers to stay find > > useful information in ton of Chinese chats. > > My thoughts (even I am not a TC candidate) on this is, > > 1. it is kind of you to stay in the local group. > > 2. if we know that you are in, we will say English if we want you to > notice. > > 3. since there is local OpenStack operation manager, hope he/she can > > identify some information and help to translate, or remind them to > > translate. > > > > My one cent. > > Is there a generic openstack group on wechat? Does one have to be > invited to it? Is there a specific openstack/nova group on wechat? I'm > on wechat anyway so I don't mind being in those groups if someone wants > to reach out. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Sun Sep 16 22:06:02 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 17 Sep 2018 06:06:02 +0800 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: Great to see the momentum going !
:) Another problem is that many people doesn't follow upstream so they are oblivious about the new features and cool things had been done in every cycle, and then all these types of half ass openstack trashing blog post got shared in wechat moments dissing how openstack 2015 didn't help to solve their 2018 problems.... Glad to have Alex and Matt sign up on the Nova side :) On Mon, Sep 17, 2018, 4:57 AM Alex Xu wrote: > I'm happy to be the translator or forwarder for the nova issue if you guys > need(although, the nova team isn't happy with me now, also i see it is not > to my personal. I guess they won't be make me hard for other work I do.). I > can see there are a lot of Chinese operators/users complain some issues, > but they never send their feedback to the mail-list, this may due to the > language, or people don't know the OpenSource culture in the China.(To be > host, the OpenStack is first project, let a lot of developers to understand > what is OpenSource, and how it is works. In the before, since the linux > kernel is hard, really only few people in the China experience OpenSource). > > > > > Matt Riedemann 于2018年9月16日周日 下午11:34写道: > >> On 9/15/2018 9:50 PM, Fred Li wrote: >> > As a non-native English speaker, it is nice-to-have that some TC or BoD >> > can stay in the local social media, like wechat group in China. But it >> > is also very difficult for non-native Chinese speakers to stay find >> > useful information in ton of Chinese chats. >> > My thoughts (even I am not a TC candidate) on this is, >> > 1. it is kind of you to stay in the local group. >> > 2. if we know that you are in, we will say English if we want you to >> notice. >> > 3. since there is local OpenStack operation manager, hope he/she can >> > identify some information and help to translate, or remind them to >> > translate. >> > >> > My one cent. >> >> Is there a generic openstack group on wechat? Does one have to be >> invited to it? 
Is there a specific openstack/nova group on wechat? I'm >> on wechat anyway so I don't mind being in those groups if someone wants >> to reach out. >> >> -- >> >> Thanks, >> >> Matt >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Sun Sep 16 22:08:55 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 16 Sep 2018 18:08:55 -0400 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: <73D3C57C-A635-41BC-82B0-6B3E6CC6F003@vexxhost.com> Sign me up too :) Sent from my iPhone > On Sep 16, 2018, at 6:06 PM, Zhipeng Huang wrote: > > Great to see the momentum going ! :) > > Another problem is that many people doesn't follow upstream so they are oblivious about the new features and cool things had been done in every cycle, and then all these types of half ass openstack trashing blog post got shared in wechat moments dissing how openstack 2015 didn't help to solve their 2018 problems.... > > Glad to have Alex and Matt sign up on the Nova side :) > >> On Mon, Sep 17, 2018, 4:57 AM Alex Xu wrote: >> I'm happy to be the translator or forwarder for the nova issue if you guys need(although, the nova team isn't happy with me now, also i see it is not to my personal. I guess they won't be make me hard for other work I do.). 
I can see there are a lot of Chinese operators/users complain some issues, but they never send their feedback to the mail-list, this may due to the language, or people don't know the OpenSource culture in the China.(To be host, the OpenStack is first project, let a lot of developers to understand what is OpenSource, and how it is works. In the before, since the linux kernel is hard, really only few people in the China experience OpenSource). >> >> >> >> >> Matt Riedemann 于2018年9月16日周日 下午11:34写道: >>> On 9/15/2018 9:50 PM, Fred Li wrote: >>> > As a non-native English speaker, it is nice-to-have that some TC or BoD >>> > can stay in the local social media, like wechat group in China. But it >>> > is also very difficult for non-native Chinese speakers to stay find >>> > useful information in ton of Chinese chats. >>> > My thoughts (even I am not a TC candidate) on this is, >>> > 1. it is kind of you to stay in the local group. >>> > 2. if we know that you are in, we will say English if we want you to notice. >>> > 3. since there is local OpenStack operation manager, hope he/she can >>> > identify some information and help to translate, or remind them to >>> > translate. >>> > >>> > My one cent. >>> >>> Is there a generic openstack group on wechat? Does one have to be >>> invited to it? Is there a specific openstack/nova group on wechat? I'm >>> on wechat anyway so I don't mind being in those groups if someone wants >>> to reach out. 
>>> >>> -- >>> >>> Thanks, >>> >>> Matt >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongle.li at gmail.com Mon Sep 17 00:24:41 2018 From: yongle.li at gmail.com (Fred Li) Date: Mon, 17 Sep 2018 08:24:41 +0800 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: There are many wechat groups about OpenStack, some of them are regional (like southern east China, Beijing, Xi'an group), some of them are event oriented, and some are for others. Yes, you need to be invited, which is not convenient. So far as I know there is not nova group, or maybe Alex knows. Thanks, I will invite you to 1 or 2 active groups. On Sun, Sep 16, 2018 at 11:33 PM, Matt Riedemann wrote: > On 9/15/2018 9:50 PM, Fred Li wrote: > >> As a non-native English speaker, it is nice-to-have that some TC or BoD >> can stay in the local social media, like wechat group in China. But it is >> also very difficult for non-native Chinese speakers to stay find useful >> information in ton of Chinese chats. 
>> My thoughts (even I am not a TC candidate) on this is, >> 1. it is kind of you to stay in the local group. >> 2. if we know that you are in, we will say English if we want you to >> notice. >> 3. since there is local OpenStack operation manager, hope he/she can >> identify some information and help to translate, or remind them to >> translate. >> >> My one cent. >> > > Is there a generic openstack group on wechat? Does one have to be invited > to it? Is there a specific openstack/nova group on wechat? I'm on wechat > anyway so I don't mind being in those groups if someone wants to reach out. > > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards Fred Li (李永乐) -------------- next part -------------- An HTML attachment was scrubbed... URL: From naichuan.sun at citrix.com Mon Sep 17 01:28:23 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Mon, 17 Sep 2018 01:28:23 +0000 Subject: [openstack-dev] About microversion setting to enable nested resource provider In-Reply-To: References: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> <90a534cec8ff4957a141af2ed1686934@SINPEX02CL01.citrite.net> <0acdc7e5-432f-fc99-4ce2-c9df53af1a3b@fried.cc> <9eba70d4c66a435792fe2c9c3ba596d4@SINPEX02CL01.citrite.net> Message-ID: <3d510d1b7cc64a7ab32a55f9c91c1548@SINPEX02CL01.citrite.net> Hi, Sylvain, In truth I'm worried about the old root RP, which includes the VGPU inventory. There is no field in the inventory which can display which GPU/GPUG it belongs to, right? Anyway, we will discuss it after you come back. Thank you very much. BR.
Naichuan Sun From: Sylvain Bauza [mailto:sbauza at redhat.com] Sent: Friday, September 14, 2018 9:34 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] About microversion setting to enable nested resource provider Le jeu. 13 sept. 2018 à 19:29, Naichuan Sun > a écrit : Hi, Sylvain, Thank you very much for the information. It is pity that I can’t attend the meeting. I have a concern about reshaper in multi-type vgpu support. In the old vgpu support, we only have one vgpu inventory in root resource provider, which means we only support one vgpu type. When do reshape, placement will send allocations(which include just one vgpu resource allocation information) to the driver, if the host have more than one pgpu/pgpug(which support different vgpu type), how do we know which pgpu/pgpug own the allocation information? Do we need to communicate with hypervisor the confirm that? The reshape will actually move the existing allocations for a VGPU resource class to the inventory for this class that is on the child resource provider now with the reshape. Since we agreed on keeping consistent naming, there is no need to guess which is which. That said, you raise a point that was discussed during the PTG and we all agreed there was an upgrade impact as multiple vGPUs shouldn't be allowed until the reshape is done. Accordingly, see my spec I reproposed for Stein which describes the upgrade impact https://review.openstack.org/#/c/602474/ Since I'm at the PTG, we have huge time difference between you and me, but we can discuss on that point next week when I'm back (my mornings match then your afternoons) -Sylvain Thank you very much. BR. 
Naichuan Sun From: Sylvain Bauza [mailto:sbauza at redhat.com] Sent: Thursday, September 13, 2018 11:47 PM To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] About microversion setting to enable nested resource provider Hey Naichuan, FWIW, we discussed on the missing pieces for nested resource providers. See the (currently-in-use) etherpad https://etherpad.openstack.org/p/nova-ptg-stein and lookup for "closing the gap on nested resource providers" (L144 while I speak) The fact that we are not able to schedule yet is a critical piece that we said we're going to work on it as soon as we can. -Sylvain On Thu, Sep 13, 2018 at 9:14 AM, Eric Fried > wrote: There's a patch series in progress for this: https://review.openstack.org/#/q/topic:use-nested-allocation-candidates It needs some TLC. I'm sure gibi and tetsuro would welcome some help... efried On 09/13/2018 08:31 AM, Naichuan Sun wrote: > Thank you very much, Jay. > Is there somewhere I could set microversion(some configure file?), Or just modify the source code to set it? > > BR. > Naichuan Sun > > -----Original Message----- > From: Jay Pipes [mailto:jaypipes at gmail.com] > Sent: Thursday, September 13, 2018 9:19 PM > To: Naichuan Sun >; OpenStack Development Mailing List (not for usage questions) > > Cc: melanie witt >; efried at us.ibm.com; Sylvain Bauza > > Subject: Re: About microversion setting to enable nested resource provider > > On 09/13/2018 06:39 AM, Naichuan Sun wrote: >> Hi, guys, >> >> Looks n-rp is disabled by default because microversion matches 1.29 : >> https://github.com/openstack/nova/blob/master/nova/api/openstack/place >> ment/handlers/allocation_candidate.py#L252 >> >> Anyone know how to set the microversion to enable n-rp in placement? 
> > It is the client which must send the 1.29+ placement API microversion header to indicate to the placement API server that the client wants to receive nested provider information in the allocation candidates response. > > Currently, nova-scheduler calls the scheduler reportclient's > get_allocation_candidates() method: > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/manager.py#L138 > > The scheduler reportclient's get_allocation_candidates() method currently passes the 1.25 placement API microversion header: > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L353 > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L53 > > In order to get the nested information returned in the allocation candidates response, that would need to be upped to 1.29. > > Best, > -jay > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
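For readers following Jay's explanation above, the client side of the microversion negotiation can be sketched roughly as below. This is a hypothetical standalone snippet, not nova's actual report client code; the endpoint URL and token are placeholders:

```python
# Hypothetical client sketch: opting in to nested resource providers by
# pinning the placement microversion in the request headers. Placement
# only returns nested provider information when the client asks for
# microversion 1.29 or later via the OpenStack-API-Version header.
import json
import urllib.parse
import urllib.request

NESTED_RP_MICROVERSION = "1.29"


def placement_headers(token, version=NESTED_RP_MICROVERSION):
    """Build request headers pinned to a given placement microversion."""
    return {
        "X-Auth-Token": token,
        "OpenStack-API-Version": "placement %s" % version,
        "Accept": "application/json",
    }


def get_allocation_candidates(endpoint, token, resources):
    """GET /allocation_candidates, e.g. resources='VCPU:1,MEMORY_MB:512,VGPU:1'."""
    query = urllib.parse.urlencode({"resources": resources})
    request = urllib.request.Request(
        "%s/allocation_candidates?%s" % (endpoint, query),
        headers=placement_headers(token))
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

With a header below 1.29 (nova's report client pinned 1.25 at the time of this thread), the server responds with the flat, non-nested view of providers, which is why the pinned version in the report client has to be raised before nested providers can be scheduled.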
URL: From soulxu at gmail.com Mon Sep 17 02:14:50 2018 From: soulxu at gmail.com (Alex Xu) Date: Mon, 17 Sep 2018 10:14:50 +0800 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: Fred Li 于2018年9月17日周一 上午8:25写道: > There are many wechat groups about OpenStack, some of them are regional > (like southern east China, Beijing, Xi'an group), some of them are event > oriented, and some are for others. Yes, you need to be invited, which is > not convenient. So far as I know there is not nova group, or maybe Alex > knows. > No, I don't have any nova group. > Thanks, I will invite you to 1 or 2 active groups. > > On Sun, Sep 16, 2018 at 11:33 PM, Matt Riedemann > wrote: > >> On 9/15/2018 9:50 PM, Fred Li wrote: >> >>> As a non-native English speaker, it is nice-to-have that some TC or BoD >>> can stay in the local social media, like wechat group in China. But it is >>> also very difficult for non-native Chinese speakers to stay find useful >>> information in ton of Chinese chats. >>> My thoughts (even I am not a TC candidate) on this is, >>> 1. it is kind of you to stay in the local group. >>> 2. if we know that you are in, we will say English if we want you to >>> notice. >>> 3. since there is local OpenStack operation manager, hope he/she can >>> identify some information and help to translate, or remind them to >>> translate. >>> >>> My one cent. >>> >> >> Is there a generic openstack group on wechat? Does one have to be invited >> to it? Is there a specific openstack/nova group on wechat? I'm on wechat >> anyway so I don't mind being in those groups if someone wants to reach out. 
>> >> >> -- >> >> Thanks, >> >> Matt >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Regards > Fred Li (李永乐) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Mon Sep 17 02:47:07 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Sun, 16 Sep 2018 20:47:07 -0600 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: Message-ID: If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple service types (e.g "compute" or "loadbalancer"). On Sat, Sep 15, 2018 at 9:01 AM Morgan Fainberg wrote: > I am generally opposed to needlessly prefixing things with "os". > > I would advocate to drop it. > > > On Fri, Sep 14, 2018, 20:17 Lance Bragstad wrote: > >> Ok - yeah, I'm not sure what the history behind that is either... >> >> I'm mainly curious if that's something we can/should keep or if we are >> opposed to dropping 'os' and 'api' from the convention (e.g. >> load-balancer:loadbalancer:post as opposed to >> os_load-balancer_api:loadbalancer:post) and just sticking with the >> service-type? >> >> On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson >> wrote: >> >>> I don't know for sure, but I assume it is short for "OpenStack" and >>> prefixing OpenStack policies vs. third party plugin policies for >>> documentation purposes. 
>>> >>> I am guilty of borrowing this from existing code examples[0]. >>> >>> [0] >>> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html >>> >>> Michael >>> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad >>> wrote: >>> > >>> > >>> > >>> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson >>> wrote: >>> >> >>> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" >>> >> which maps to the "os--api::" format. >>> > >>> > >>> > Thanks for explaining the justification, Michael. >>> > >>> > I'm curious if anyone has context on the "os-" part of the format? >>> I've seen that pattern in a couple different projects. Does anyone know >>> about its origin? Was it something we converted to our policy names because >>> of API names/paths? >>> > >>> >> >>> >> >>> >> I selected it as it uses the service-type[1], references the API >>> >> resource, and then the method. So it maps well to the API reference[2] >>> >> for the service. >>> >> >>> >> [0] >>> https://docs.openstack.org/octavia/latest/configuration/policy.html >>> >> [1] https://service-types.openstack.org/ >>> >> [2] >>> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer >>> >> >>> >> Michael >>> >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: >>> >> > >>> >> > So +1 >>> >> > >>> >> > >>> >> > >>> >> > Tim >>> >> > >>> >> > >>> >> > >>> >> > From: Lance Bragstad >>> >> > Reply-To: "OpenStack Development Mailing List (not for usage >>> questions)" >>> >> > Date: Wednesday, 12 September 2018 at 20:43 >>> >> > To: "OpenStack Development Mailing List (not for usage questions)" < >>> openstack-dev at lists.openstack.org>, OpenStack Operators < >>> openstack-operators at lists.openstack.org> >>> >> > Subject: [openstack-dev] [all] Consistent policy names >>> >> > >>> >> > >>> >> > >>> >> > The topic of having consistent policy names has popped up a few >>> times this week. 
Ultimately, if we are to move forward with this, we'll >>> need a convention. To help with that a little bit I started an etherpad [0] >>> that includes links to policy references, basic conventions *within* that >>> service, and some examples of each. I got through quite a few projects this >>> morning, but there are still a couple left. >>> >> > >>> >> > >>> >> > >>> >> > The idea is to look at what we do today and see what conventions we >>> can come up with to move towards, which should also help us determine how >>> much each convention is going to impact services (e.g. picking a convention >>> that will cause 70% of services to rename policies). >>> >> > >>> >> > >>> >> > >>> >> > Please have a look and we can discuss conventions in this thread. >>> If we come to agreement, I'll start working on some documentation in >>> oslo.policy so that it's somewhat official because starting to renaming >>> policies. >>> >> > >>> >> > >>> >> > >>> >> > [0] https://etherpad.openstack.org/p/consistent-policy-names >>> >> > >>> >> > _______________________________________________ >>> >> > OpenStack-operators mailing list >>> >> > OpenStack-operators at lists.openstack.org >>> >> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> >>> >> >>> __________________________________________________________________________ >>> >> OpenStack Development Mailing List (not for usage questions) >>> >> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> > _______________________________________________ >>> > OpenStack-operators mailing list >>> > OpenStack-operators at lists.openstack.org >>> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> 
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Mon Sep 17 09:28:15 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Mon, 17 Sep 2018 10:28:15 +0100 Subject: [openstack-dev] [puppet] [placement] In-Reply-To: References: Message-ID: <20180917092815.zjs5gbewqm2lytjp@lyarwood.usersys.redhat.com> On 15-09-18 11:42:37, Emilien Macchi wrote: > I'm currently taking care of creating puppet-placement: > https://review.openstack.org/#/c/602870/ > https://review.openstack.org/#/c/602871/ > https://review.openstack.org/#/c/602869/ > > Once these merge, we'll use cookiecutter, and move things from puppet-nova. > We'll also find a way to call puppet-placement from nova::placement class, > eventually. > Hopefully we can make the switch to new placement during Stein! Thanks Emilien, FWIW I've also started work on the RDO packaging front [1] and would be happy to help with this puppet extraction. Cheers, Lee [1] https://gitlab.com/lyarwood/placement-distgit -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From tobias.urdin at binero.se Mon Sep 17 09:32:08 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 17 Sep 2018 11:32:08 +0200 Subject: [openstack-dev] [puppet] [placement] In-Reply-To: References: Message-ID: <846a1969-0ead-d8e0-33e3-528ddf6009b1@binero.se> Sounds great, thanks for pushing this Emilien! Best regards Tobias On 09/15/2018 07:48 PM, Emilien Macchi wrote: > I'm currently taking care of creating puppet-placement: > https://review.openstack.org/#/c/602870/ > https://review.openstack.org/#/c/602871/ > https://review.openstack.org/#/c/602869/ > > Once these merge, we'll use cookiecutter, and move things from > puppet-nova. We'll also find a way to call puppet-placement from > nova::placement class, eventually. > Hopefully we can make the switch to new placement during Stein! > > Thanks, > -- > Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Mon Sep 17 11:23:06 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 17 Sep 2018 13:23:06 +0200 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possibledeprecation of Nova's legacy notification interface In-Reply-To: <1535466670.23583.3@smtp.office365.com> References: <1533807698.26377.7@smtp.office365.com> <1535466670.23583.3@smtp.office365.com> Message-ID: <1537183386.22188.5@smtp.office365.com> Hi, On the Stein PTG the nova team agreed to deprecate the legacy, unversioned notification interface of nova. We also agreed that we will not try to remove the legacy notification sending from the code any time soon. So this deprecation means the following: * by default configuration nova will only emit versioned notifications, but the unversioned notifications still can be turned on in the configuration. 
* nova will not maintain the legacy notification code path further, so it can break I pushed the deprecation patch [2] but it will only be merged after the remaining versioned notification transformation patches [3] are merged. Cheers, gibi [2] https://review.openstack.org/#/c/603079 [3] https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-stein On Tue, Aug 28, 2018 at 4:31 PM, Balázs Gibizer wrote: > Thanks for all the responses. I collected them on the nova ptg > discussion etherpad [1] (L186 at the moment). The nova team will talk > about deprecation of the legacy interface on Friday on the PTG. If > you want participate in the discussion but you are not planning to > sit in the nova room whole day then let me know and I will try to > ping you over IRC when we about to start the item. > > Cheers, > gibi > > [1] https://etherpad.openstack.org/p/nova-ptg-stein > > On Thu, Aug 9, 2018 at 11:41 AM, Balázs Gibizer > wrote: >> Dear Nova notification consumers! >> >> >> The Nova team made progress with the new versioned notification >> interface [1] and it is almost reached feature parity [2] with the >> legacy, unversioned one. So Nova team will discuss on the upcoming >> PTG the deprecation of the legacy interface. There is a list of >> projects (we know of) consuming the legacy interface and we would >> like to know if any of these projects plan to switch over to the >> new interface in the foreseeable future so we can make a well >> informed decision about the deprecation. 
>> >> >> * Searchlight [3] - it is in maintenance mode so I guess the answer >> is no >> * Designate [4] >> * Telemetry [5] >> * Mistral [6] >> * Blazar [7] >> * Watcher [8] - it seems Watcher uses both legacy and versioned nova >> notifications >> * Masakari - I'm not sure Masakari depends on nova notifications or >> not >> >> Cheers, >> gibi >> >> [1] >> https://docs.openstack.org/nova/latest/reference/notifications.html >> [2] http://burndown.peermore.com/nova-notification/ >> >> [3] >> https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py >> [4] >> https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py >> [5] >> https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2 >> [6] >> https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2 >> [7] >> https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst >> [8] >> https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335 >> >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dangtrinhnt at gmail.com Mon Sep 17 11:35:29 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 17 Sep 2018 20:35:29 +0900 Subject: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possibledeprecation of Nova's legacy notification interface In-Reply-To: <1537183386.22188.5@smtp.office365.com> References: <1533807698.26377.7@smtp.office365.com> <1535466670.23583.3@smtp.office365.com> <1537183386.22188.5@smtp.office365.com> Message-ID: Hi gibi, Thank for the info. 
Searchlight team is working on a patch to support the versioned Nova. Hopefully, it will be released in Stein-1. Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Mon, Sep 17, 2018 at 8:23 PM Balázs Gibizer wrote: > Hi, > > On the Stein PTG the nova team agreed to deprecate the legacy, > unversioned notification interface of nova. We also agreed that we will > not try to remove the legacy notification sending from the code any > time soon. So this deprecation means the following: > * by default configuration nova will only emit versioned notifications, > but the unversioned notifications still can be turned on in the > configuration. > * nova will not maintain the legacy notification code path further, so > it can break > > I pushed the deprecation patch [2] but it will only be merged after the > remaining versioned notification transformation patches [3] are merged. > > Cheers, > gibi > > [2] https://review.openstack.org/#/c/603079 > [3] > > https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-stein > > On Tue, Aug 28, 2018 at 4:31 PM, Balázs Gibizer > wrote: > > Thanks for all the responses. I collected them on the nova ptg > > discussion etherpad [1] (L186 at the moment). The nova team will talk > > about deprecation of the legacy interface on Friday on the PTG. If > > you want participate in the discussion but you are not planning to > > sit in the nova room whole day then let me know and I will try to > > ping you over IRC when we about to start the item. > > > > Cheers, > > gibi > > > > [1] https://etherpad.openstack.org/p/nova-ptg-stein > > > > On Thu, Aug 9, 2018 at 11:41 AM, Balázs Gibizer > > wrote: > >> Dear Nova notification consumers! > >> > >> > >> The Nova team made progress with the new versioned notification > >> interface [1] and it is almost reached feature parity [2] with the > >> legacy, unversioned one. 
So Nova team will discuss on the upcoming > >> PTG the deprecation of the legacy interface. There is a list of > >> projects (we know of) consuming the legacy interface and we would > >> like to know if any of these projects plan to switch over to the > >> new interface in the foreseeable future so we can make a well > >> informed decision about the deprecation. > >> > >> > >> * Searchlight [3] - it is in maintenance mode so I guess the answer > >> is no > >> * Designate [4] > >> * Telemetry [5] > >> * Mistral [6] > >> * Blazar [7] > >> * Watcher [8] - it seems Watcher uses both legacy and versioned nova > >> notifications > >> * Masakari - I'm not sure Masakari depends on nova notifications or > >> not > >> > >> Cheers, > >> gibi > >> > >> [1] > >> https://docs.openstack.org/nova/latest/reference/notifications.html > >> [2] http://burndown.peermore.com/nova-notification/ > >> > >> [3] > >> > https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py > >> [4] > >> > https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py > >> [5] > >> > https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2 > >> [6] > >> > https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2 > >> [7] > >> > https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst > >> [8] > >> > https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335 > >> > >> > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > 
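For operators following the deprecation gibi describes, the toggle lives in nova.conf. Option names below are as of the Rocky/Stein timeframe; check the configuration reference for your release:

```ini
[notifications]
# Emit only the new versioned payloads; set this to "unversioned" or
# "both" to keep feeding legacy consumers during a transition.
notification_format = versioned
```

Keeping this at "both" until all consumers (e.g. Searchlight, Designate, Telemetry) have switched over avoids breaking them when the legacy code path stops being maintained.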
__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From dabarren at gmail.com Mon Sep 17 12:10:52 2018
From: dabarren at gmail.com (Eduardo Gonzalez)
Date: Mon, 17 Sep 2018 14:10:52 +0200
Subject: [openstack-dev] [kolla] Committing proprietary plugins to OpenStack
In-Reply-To: <1536931205555.60413@cisco.com>
References: <6ece4952-6ea2-70e6-2b7d-3c2d4dbe8287@suse.com> <1536931205555.60413@cisco.com>
Message-ID:

Hi Shyam,

We offer methods [0] to allow users to add their plugins at build time through template overrides. This allows extending Dockerfiles with custom content, which may fit your use case. We also offer a method to add an external directory to the buildable list of Dockerfiles with ``--config-dir /mydockerfiles/``.

With these two methods you can provide your users with the images, or with the mechanism to customize their own Docker images.

In kolla's repository there is a contrib/ folder [1] for template overrides, neither tested in CI nor supported by the team; it only contains examples, which may or may not work, of how to extend Dockerfiles for specific use cases. As an example, there is a patch under review to add overrides for Cisco plugins [2].

Regards

[0] https://docs.openstack.org/kolla/latest/admin/image-building.html#dockerfile-customisation
[1] https://github.com/openstack/kolla/tree/master/contrib/template-override
[2] https://review.openstack.org/#/c/552119/

El vie., 14 sept. 2018 a las 15:20, Steven Dake (stdake) () escribió: > Shyam, > > > Our policy, decided long ago, is that we would work with third party > components (such as plugins) for nova, cinder, neutron, horizon, etc that > were proprietary as long as the code that merges into Kolla specifically is > ASL2.
> > > What is your plugin for? if its for nova, cinder, neutron, horizon, it is > covered by this policy pretty much wholesale. If its a different type of > system, some debate may be warranted by the core team. > > > Cheers > > -steve > ------------------------------ > *From:* Shyam Biradar > *Sent:* Wednesday, September 12, 2018 5:01 AM > *To:* Andreas Jaeger > *Cc:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [kolla] Committing proprietary plugins to > OpenStack > > Yes Andreas, whatever deployment scripts we will be pushing it will be > under apache license. > > [image: logo] > *Shyam Biradar* * Software Engineer | DevOps* > M +91 8600266938 | shyam.biradar at trilio.io | trilio.io > > On Wed, Sep 12, 2018 at 5:24 PM, Andreas Jaeger wrote: > >> On 2018-09-12 13:21, Shyam Biradar wrote: >> >>> Hi, >>> >>> We have a proprietary openstack plugin. We want to commit deployment >>> scripts like containers and heat templates to upstream in tripleo and kolla >>> project but not actual product code. >>> >>> Is it possible? Or How can we handle this case? Any thoughts are welcome. >>> >> >> It's first a legal question - is everything you are pushing under the >> Apache license as the rest of the project that you push to? >> >> And then a policy of kolla project, so let me tag them >> >> Andreas >> -- >> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi >> SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nür >> nberg, >> Germany >> GF: Felix Imendörffer, Jane Smithard, Graham Norton, >> HRB 21284 (AG Nürnberg) >> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Mon Sep 17 12:43:51 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 17 Sep 2018 08:43:51 -0400 Subject: [openstack-dev] About microversion setting to enable nested resource provider In-Reply-To: <3d510d1b7cc64a7ab32a55f9c91c1548@SINPEX02CL01.citrite.net> References: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> <90a534cec8ff4957a141af2ed1686934@SINPEX02CL01.citrite.net> <0acdc7e5-432f-fc99-4ce2-c9df53af1a3b@fried.cc> <9eba70d4c66a435792fe2c9c3ba596d4@SINPEX02CL01.citrite.net> <3d510d1b7cc64a7ab32a55f9c91c1548@SINPEX02CL01.citrite.net> Message-ID: On 09/16/2018 09:28 PM, Naichuan Sun wrote: > Hi, Sylvain, > > In truth I’m worrying about the old root rp which include the vgpu > inventory. There is no field in the inventory which can display which > GPU/GPUG it belong to, right? Anyway,  will discuss it after you come back. As Sylvain mentions below, you will need to have some mechanism in the XenAPI virt driver which creates child resource providers under the existing root provider (which is the compute node resource provider). You will need to have the virt driver persist the mapping between your internal physical GPU group name and the UUID of the resource provider record that the virt driver creates for that PGPU group. So, for example, let's say you have two PGPU groups on the host. 
They are named PGPU_A and PGPU_B. The XenAPI virt driver will need to ask the ProviderTree object it receives in the update_provider_tree() virt driver method whether there is a resource provider named "PGPU_A" in the tree. If not, the virt driver needs to create a new child resource provider with the name "PGPU_A" with a parent provider pointing to the root compute node provider. The ProviderTree.new_child() method is used to create new child providers: https://github.com/openstack/nova/blob/82270cc261f6c1d9d2cc386f1fb445dd66023f75/nova/compute/provider_tree.py#L411 Hope that makes sense, -jay > Thank very much. > > BR. > > Naichuan Sun > > *From:*Sylvain Bauza [mailto:sbauza at redhat.com] > *Sent:* Friday, September 14, 2018 9:34 PM > *To:* OpenStack Development Mailing List (not for usage questions) > > *Subject:* Re: [openstack-dev] About microversion setting to enable > nested resource provider > > Le jeu. 13 sept. 2018 à 19:29, Naichuan Sun > a écrit : > > Hi, Sylvain, > > Thank you very much for the information. It is pity that I can’t > attend the meeting. > > I have a concern about reshaper in multi-type vgpu support. > > In the old vgpu support, we only have one vgpu inventory in root > resource provider, which means we only support one vgpu type. When > do reshape, placement will send allocations(which include just one > vgpu resource allocation information) to the driver, if the host > have more than one pgpu/pgpug(which support different vgpu type), > how do we know which pgpu/pgpug own the allocation information? Do > we need to communicate with hypervisor the confirm that? > > The reshape will actually move the existing allocations for a VGPU > resource class to the inventory for this class that is on the child > resource provider now with the reshape. > > Since we agreed on keeping consistent naming, there is no need to guess > which is which. 
That said, you raise a point that was discussed during > the PTG and we all agreed there was an upgrade impact as multiple vGPUs > shouldn't be allowed until the reshape is done. > > Accordingly, see my spec I reproposed for Stein which describes the > upgrade impact https://review.openstack.org/#/c/602474/ > > Since I'm at the PTG, we have huge time difference between you and me, > but we can discuss on that point next week when I'm back (my mornings > match then your afternoons) > > -Sylvain > > Thank you very much. > > BR. > > Naichuan Sun > > *From:*Sylvain Bauza [mailto:sbauza at redhat.com > ] > *Sent:* Thursday, September 13, 2018 11:47 PM > *To:* OpenStack Development Mailing List (not for usage questions) > > > *Subject:* Re: [openstack-dev] About microversion setting to enable > nested resource provider > > Hey Naichuan, > > FWIW, we discussed on the missing pieces for nested resource > providers. See the (currently-in-use) etherpad > https://etherpad.openstack.org/p/nova-ptg-stein and lookup for > "closing the gap on nested resource providers" (L144 while I speak) > > The fact that we are not able to schedule yet is a critical piece > that we said we're going to work on it as soon as we can. > > -Sylvain > > On Thu, Sep 13, 2018 at 9:14 AM, Eric Fried > wrote: > > There's a patch series in progress for this: > > https://review.openstack.org/#/q/topic:use-nested-allocation-candidates > > It needs some TLC. I'm sure gibi and tetsuro would welcome some > help... > > efried > > > On 09/13/2018 08:31 AM, Naichuan Sun wrote: > > Thank you very much, Jay. > > Is there somewhere I could set microversion(some configure > file?), Or just modify the source code to set it? > > > > BR. 
> > Naichuan Sun > > > > -----Original Message----- > > From: Jay Pipes [mailto:jaypipes at gmail.com > ] > > Sent: Thursday, September 13, 2018 9:19 PM > > To: Naichuan Sun >; OpenStack Development Mailing > List (not for usage questions) > > > > Cc: melanie witt >; efried at us.ibm.com > ; Sylvain Bauza > > > Subject: Re: About microversion setting to enable nested > resource provider > > > > On 09/13/2018 06:39 AM, Naichuan Sun wrote: > >> Hi, guys, > >> > >> Looks n-rp is disabled by default because microversion > matches 1.29 : > >> > https://github.com/openstack/nova/blob/master/nova/api/openstack/place > >> ment/handlers/allocation_candidate.py#L252 > >> > >> Anyone know how to set the microversion to enable n-rp in > placement? > > > > It is the client which must send the 1.29+ placement API > microversion header to indicate to the placement API server that > the client wants to receive nested provider information in the > allocation candidates response. > > > > Currently, nova-scheduler calls the scheduler reportclient's > > get_allocation_candidates() method: > > > > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/manager.py#L138 > > > > The scheduler reportclient's get_allocation_candidates() > method currently passes the 1.25 placement API microversion header: > > > > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L353 > > > > > https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L53 > > > > In order to get the nested information returned in the > allocation candidates response, that would need to be upped to 1.29. 
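To make the header change described above concrete, a client-side sketch might look like this. The helper name and surrounding code are purely illustrative (this is not nova's actual report client API); only the ``OpenStack-API-Version`` header value comes from the thread.

```python
# Placement only includes nested resource provider information in
# allocation candidate responses when the client requests microversion
# 1.29 or higher; the report client currently sends 1.25.
NESTED_RP_MICROVERSION = "1.29"


def allocation_candidates_headers(token, microversion=NESTED_RP_MICROVERSION):
    """Build headers for a GET /allocation_candidates request that opts in
    to nested resource providers (hypothetical helper for illustration)."""
    return {
        "X-Auth-Token": token,
        # The placement server keys its response format off this header.
        "OpenStack-API-Version": "placement %s" % microversion,
        "Accept": "application/json",
    }
```

The rest of the request is unchanged; only this header decides whether nested provider data comes back.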
> > > > Best, > > -jay > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Mon Sep 17 12:48:01 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 17 Sep 2018 08:48:01 -0400 Subject: [openstack-dev] [puppet] [placement] In-Reply-To: <20180917092815.zjs5gbewqm2lytjp@lyarwood.usersys.redhat.com> References: <20180917092815.zjs5gbewqm2lytjp@lyarwood.usersys.redhat.com> Message-ID: On Mon, Sep 17, 2018 at 5:29 AM Lee Yarwood wrote: > FWIW I've also started work on the RDO packaging front [1] and would be > happy to help with this puppet extraction. > Good to know, thanks. 
Once we have the repo in place, here is a plan proposal: * Populate the repo with cookiecutter & adjust to Placement service * cp code from nova::placement (and nova::wsgi::apache_placement) * package placement and puppet-placement in RDO * start testing puppet-placement in puppet-openstack-integration * switch tripleo-common / THT to deploy placement in nova_placement container * switch tripleo to use puppet-placement (in THT) * probably rename nova_placement container/service into placement or something generic Feedback is welcome, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon Sep 17 12:56:12 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 17 Sep 2018 06:56:12 -0600 Subject: [openstack-dev] About microversion setting to enable nested resource provider In-Reply-To: References: <0e33fb6ca6484035bee76197f36b9aae@SINPEX02CL01.citrite.net> <7e200b01-4f83-95b4-8efa-8b4897c39da5@gmail.com> <90a534cec8ff4957a141af2ed1686934@SINPEX02CL01.citrite.net> <0acdc7e5-432f-fc99-4ce2-c9df53af1a3b@fried.cc> <9eba70d4c66a435792fe2c9c3ba596d4@SINPEX02CL01.citrite.net> <3d510d1b7cc64a7ab32a55f9c91c1548@SINPEX02CL01.citrite.net> Message-ID: On Mon, Sep 17, 2018 at 6:43 AM, Jay Pipes wrote: > On 09/16/2018 09:28 PM, Naichuan Sun wrote: > >> Hi, Sylvain, >> >> In truth I’m worrying about the old root rp which include the vgpu >> inventory. There is no field in the inventory which can display which >> GPU/GPUG it belong to, right? Anyway, will discuss it after you come back. >> > > As Sylvain mentions below, you will need to have some mechanism in the > XenAPI virt driver which creates child resource providers under the > existing root provider (which is the compute node resource provider). You > will need to have the virt driver persist the mapping between your internal > physical GPU group name and the UUID of the resource provider record that > the virt driver creates for that PGPU group. 
AFAICT, we don't even need to persist the mapping. Since we only support one GPU type (or group for Xen) in Rocky, you just have to know what the original type was when upgrading to Stein and then look at the related resource provider. That's why I wrote an upgrade impact section in my multiple-types spec (see below) saying that in Stein, you need to make sure you only accept one type until the reshape is fully done.

-Sylvain

> So, for example, let's say you have two PGPU groups on the host. They are > named PGPU_A and PGPU_B. The XenAPI virt driver will need to ask the > ProviderTree object it receives in the update_provider_tree() virt driver > method whether there is a resource provider named "PGPU_A" in the tree. If > not, the virt driver needs to create a new child resource provider with the > name "PGPU_A" with a parent provider pointing to the root compute node > provider. The ProviderTree.new_child() method is used to create new child > providers: > > https://github.com/openstack/nova/blob/82270cc261f6c1d9d2cc386f1fb445dd66023f75/nova/compute/provider_tree.py#L411 > > Hope that makes sense, > -jay > > Thank very much. >> >> BR. >> >> Naichuan Sun >> >> *From:*Sylvain Bauza [mailto:sbauza at redhat.com] >> *Sent:* Friday, September 14, 2018 9:34 PM >> *To:* OpenStack Development Mailing List (not for usage questions) < >> openstack-dev at lists.openstack.org> >> *Subject:* Re: [openstack-dev] About microversion setting to enable >> nested resource provider >> >> Le jeu. 13 sept. 2018 à 19:29, Naichuan Sun > > a écrit : >> >> Hi, Sylvain, >> >> Thank you very much for the information. It is pity that I can’t >> attend the meeting. >> >> I have a concern about reshaper in multi-type vgpu support. >> >> In the old vgpu support, we only have one vgpu inventory in root >> resource provider, which means we only support one vgpu type.
When >> do reshape, placement will send allocations(which include just one >> vgpu resource allocation information) to the driver, if the host >> have more than one pgpu/pgpug(which support different vgpu type), >> how do we know which pgpu/pgpug own the allocation information? Do >> we need to communicate with hypervisor the confirm that? >> >> The reshape will actually move the existing allocations for a VGPU >> resource class to the inventory for this class that is on the child >> resource provider now with the reshape. >> >> Since we agreed on keeping consistent naming, there is no need to guess >> which is which. That said, you raise a point that was discussed during the >> PTG and we all agreed there was an upgrade impact as multiple vGPUs >> shouldn't be allowed until the reshape is done. >> >> Accordingly, see my spec I reproposed for Stein which describes the >> upgrade impact https://review.openstack.org/#/c/602474/ >> >> Since I'm at the PTG, we have huge time difference between you and me, >> but we can discuss on that point next week when I'm back (my mornings match >> then your afternoons) >> >> -Sylvain >> >> Thank you very much. >> >> BR. >> >> Naichuan Sun >> >> *From:*Sylvain Bauza [mailto:sbauza at redhat.com >> ] >> *Sent:* Thursday, September 13, 2018 11:47 PM >> *To:* OpenStack Development Mailing List (not for usage questions) >> > > >> *Subject:* Re: [openstack-dev] About microversion setting to enable >> nested resource provider >> >> Hey Naichuan, >> >> FWIW, we discussed on the missing pieces for nested resource >> providers. See the (currently-in-use) etherpad >> https://etherpad.openstack.org/p/nova-ptg-stein and lookup for >> "closing the gap on nested resource providers" (L144 while I speak) >> >> The fact that we are not able to schedule yet is a critical piece >> that we said we're going to work on it as soon as we can. 
>> >> -Sylvain >> >> On Thu, Sep 13, 2018 at 9:14 AM, Eric Fried > > wrote: >> >> There's a patch series in progress for this: >> >> https://review.openstack.org/#/q/topic:use-nested-allocation >> -candidates >> >> It needs some TLC. I'm sure gibi and tetsuro would welcome some >> help... >> >> efried >> >> >> On 09/13/2018 08:31 AM, Naichuan Sun wrote: >> > Thank you very much, Jay. >> > Is there somewhere I could set microversion(some configure >> file?), Or just modify the source code to set it? >> > >> > BR. >> > Naichuan Sun >> > >> > -----Original Message----- >> > From: Jay Pipes [mailto:jaypipes at gmail.com >> ] >> > Sent: Thursday, September 13, 2018 9:19 PM >> > To: Naichuan Sun > >; OpenStack Development Mailing >> List (not for usage questions) >> > > >> > Cc: melanie witt > >; efried at us.ibm.com >> ; Sylvain Bauza > > >> >> > Subject: Re: About microversion setting to enable nested >> resource provider >> > >> > On 09/13/2018 06:39 AM, Naichuan Sun wrote: >> >> Hi, guys, >> >> >> >> Looks n-rp is disabled by default because microversion >> matches 1.29 : >> >> >> https://github.com/openstack/nova/blob/master/nova/api/opens >> tack/place >> >> ment/handlers/allocation_candidate.py#L252 >> >> >> >> Anyone know how to set the microversion to enable n-rp in >> placement? >> > >> > It is the client which must send the 1.29+ placement API >> microversion header to indicate to the placement API server that >> the client wants to receive nested provider information in the >> allocation candidates response. 
>> > >> > Currently, nova-scheduler calls the scheduler reportclient's >> > get_allocation_candidates() method: >> > >> > >> https://github.com/openstack/nova/blob/0ba34a818414823eda5e6 >> 93dc2127a534410b5df/nova/scheduler/manager.py#L138 >> > >> > The scheduler reportclient's get_allocation_candidates() >> method currently passes the 1.25 placement API microversion >> header: >> > >> > >> https://github.com/openstack/nova/blob/0ba34a818414823eda5e6 >> 93dc2127a534410b5df/nova/scheduler/client/report.py#L353 >> > >> > >> https://github.com/openstack/nova/blob/0ba34a818414823eda5e6 >> 93dc2127a534410b5df/nova/scheduler/client/report.py#L53 >> > >> > In order to get the nested information returned in the >> allocation candidates response, that would need to be upped to >> 1.29. >> > >> > Best, >> > -jay >> >> > >> ____________________________________________________________ >> ______________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > subscribe> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-dev >> > >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > subscribe> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at 
lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From balazs.gibizer at ericsson.com Mon Sep 17 13:10:15 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Mon, 17 Sep 2018 15:10:15 +0200
Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict
In-Reply-To: <662fdad7-ddcd-3c68-d94a-d1b06218087c@gmail.com>
References: <1534419109.24276.3@smtp.office365.com> <1534419803.3149.0@smtp.office365.com> <1534500637.29318.1@smtp.office365.com> <7b45da6c-c8d3-c54f-89c0-9798589dfdc4@fried.cc> <1534942527.7552.8@smtp.office365.com> <662fdad7-ddcd-3c68-d94a-d1b06218087c@gmail.com>
Message-ID: <1537189815.22188.6@smtp.office365.com>

Hi,

Reworked and rebased the series based on this thread. The series starts here https://review.openstack.org/#/c/591597

Cheers,
gibi

From amotoki at gmail.com Mon Sep 17 13:27:04 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Mon, 17 Sep 2018 22:27:04 +0900
Subject: [openstack-dev] [neutron] bug deputy report week Sep 10 - 12
Message-ID:

Hi,

I was the bug deputy for neutron last week. Last week was really quiet, but all of the reported bugs are gate failures and need attention.

grenade-dvr-multinode job fails https://bugs.launchpad.net/neutron/+bug/1791989 This has been happening continuously over the last week with a high failure rate.
http://grafana.openstack.org/d/Hj5IHcSmz/neutron-failure-rate?panelId=20&fullscreen&orgId=1

neutron_tempest_plugin: test_floatingip_port_details occasionally fails https://bugs.launchpad.net/neutron/+bug/1792472

q-dhcp crashes with guru meditation on ironic's grenade https://bugs.launchpad.net/neutron/+bug/1792925 This was reported this Monday. It has been occurring since at least Sep 10.

Best Regards,
Akihiro Motoki (irc: amotoki)

From fungi at yuggoth.org Mon Sep 17 13:32:11 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 17 Sep 2018 13:32:11 +0000
Subject: [openstack-dev] [election][tc]Question for candidates about global reachout
In-Reply-To:
References: <20180914204756.o5umojwxvypskwti@yuggoth.org>
Message-ID: <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org>

On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote: [...] > - What is the problem joining Wechat will solve (keeping in mind the > language barrier)?

As I understand it, the suggestion is that mere presence of project leadership in venues where this emerging subset of our community gathers would provide a strong signal that we support them and care about their experience with the software.

> - Isn't this problem already solved for other languages with > existing initiatives like local ambassadors and i18n team? Why > aren't these relevant? [...]

It seems like there are at least a couple of factors at play here: first the significant number of users and contributors within mainland China compared to other regions (analysis suggests there were nearly as many contributors to the Rocky release from China as the USA), but second there may be facets of Chinese culture which make this sort of demonstrative presence a much stronger signal than it would be in other cultures.

> - Pardon my ignorance here, what is the problem with email? (I > understand some chat systems might be blocked, I thought emails > would be fine, and the lowest common denominator).
Someone in the TC room (forgive me, I don't recall who now, maybe Rico?) asserted that Chinese contributors generally only read the first message in any given thread (perhaps just looking for possible announcements?) and that if they _do_ attempt to read through some of the longer threads they don't participate in them because the discussion is presumed to be over and decisions final by the time they "reach the end" (I guess not realizing that it's perfectly fine to reply to a month-old discussion and try to help alter course on things if you have an actual concern?). > I also have technical questions about 'wechat' (like how do you > use it without a smartphone?) and the relevance of tools we > currently use, but this will open Pandora's box, and I'd rather > not spend my energy on closing that box right now :D Not that I was planning on running it myself, but I did look into the logistics. Apparently there is at least one free/libre open source wechat client under active development but you still need to use a separate mobile device to authenticate your client's connection to wechat's central communication service. By design, it appears this is so that you can't avoid reporting your physical location (it's been suggested this is to comply with government requirements for tracking citizens participating in potentially illegal discussions). They also go to lengths to prevent you from running the required mobile app within an emulator, since that would provide a possible workaround to avoid being tracked. Further, there is some history of backdoors getting included in the software, so you need to use it with the expectation that you're basically handing over all communications and content for which you use that mobile device to wechat developers/service operators and, by proxy, the Chinese government. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jaypipes at gmail.com Mon Sep 17 13:38:19 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 17 Sep 2018 09:38:19 -0400 Subject: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict In-Reply-To: <1537189815.22188.6@smtp.office365.com> References: <1534419109.24276.3@smtp.office365.com> <1534419803.3149.0@smtp.office365.com> <1534500637.29318.1@smtp.office365.com> <7b45da6c-c8d3-c54f-89c0-9798589dfdc4@fried.cc> <1534942527.7552.8@smtp.office365.com> <662fdad7-ddcd-3c68-d94a-d1b06218087c@gmail.com> <1537189815.22188.6@smtp.office365.com> Message-ID: <9ba18ff6-2693-ae33-ab8c-5b6a3c2d9039@gmail.com> Thanks Giblet, Will review this afternoon. Best, -jay On 09/17/2018 09:10 AM, Balázs Gibizer wrote: > > Hi, > > Reworked and rebased the series based on this thread. The series starts > here https://review.openstack.org/#/c/591597 > > Cheers, > gibi > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Mon Sep 17 13:47:57 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 17 Sep 2018 08:47:57 -0500 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> Message-ID: <20180917134757.GA8001@sm-workstation> > > > I also have technical questions about 'wechat' (like how do you > > use it without a smartphone?) 
and the relevance of tools we > > currently use, but this will open Pandora's box, and I'd rather > > not spend my energy on closing that box right now :D > > Not that I was planning on running it myself, but I did look into > the logistics. Apparently there is at least one free/libre open > source wechat client under active development but you still need to > use a separate mobile device to authenticate your client's > connection to wechat's central communication service. By design, it > appears this is so that you can't avoid reporting your physical > location (it's been suggested this is to comply with government > requirements for tracking citizens participating in potentially > illegal discussions). This is correct from my experience. There are one or two desktop clients I have found out there, but there is no way to log in to them other than logging into your phone app, then using that to scan a QR-code like image in order to authenticate. As far as I know, there is no way to use Wechat without installing a smartphone app. From sylvain.bauza at gmail.com Mon Sep 17 13:56:50 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Mon, 17 Sep 2018 15:56:50 +0200 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> Message-ID: Le lun. 17 sept. 2018 à 15:32, Jeremy Stanley a écrit : > On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote: > [...] > > - What is the problem joining Wechat will solve (keeping in mind the > > language barrier)? > > As I understand it, the suggestion is that mere presence of project > leadership in venues where this emerging subset of our community > gathers would provide a strong signal that we support them and care > about their experience with the software. 
> > - Isn't this problem already solved for other languages with > > existing initiatives like local ambassadors and i18n team? Why > > aren't these relevant? > [...] > > It seems like there are at least couple of factors at play here: > first the significant number of users and contributors within > mainland China compared to other regions (analysis suggests there > were nearly as many contributors to the Rocky release from China as > the USA), but second there may be facets of Chinese culture which > make this sort of demonstrative presence a much stronger signal than > it would be in other cultures. > > > - Pardon my ignorance here, what is the problem with email? (I > > understand some chat systems might be blocked, I thought emails > > would be fine, and the lowest common denominator). > > Someone in the TC room (forgive me, I don't recall who now, maybe > Rico?) asserted that Chinese contributors generally only read the > first message in any given thread (perhaps just looking for possible > announcements?) and that if they _do_ attempt to read through some > of the longer threads they don't participate in them because the > discussion is presumed to be over and decisions final by the time > they "reach the end" (I guess not realizing that it's perfectly fine > to reply to a month-old discussion and try to help alter course on > things if you have an actual concern?).

While I understand the technical issues that can come from using IRC in China, I still don't get why opening the gates and making WeChat yet another official channel would prevent our community from fragmenting. Truly, the usage of IRC is questionable, but if we have multiple ways to discuss, I just doubt we could keep ourselves from siloing along our personal usages.
Either we consider the new channels as being only for southbound communication, or we envisage the possibility, as a community, of migrating from IRC to elsewhere (I'm particularly not a fan of the latter, so I would challenge it, but I can understand the reasons) -Sylvain > I also have technical questions about 'wechat' (like how do you > > use it without a smartphone?) and the relevance of tools we > > currently use, but this will open Pandora's box, and I'd rather > > not spend my energy on closing that box right now :D > > Not that I was planning on running it myself, but I did look into > the logistics. Apparently there is at least one free/libre open > source wechat client under active development but you still need to > use a separate mobile device to authenticate your client's > connection to wechat's central communication service. By design, it > appears this is so that you can't avoid reporting your physical > location (it's been suggested this is to comply with government > requirements for tracking citizens participating in potentially > illegal discussions). They also go to lengths to prevent you from > running the required mobile app within an emulator, since that would > provide a possible workaround to avoid being tracked. Further, there > is some history of backdoors getting included in the software, so > you need to use it with the expectation that you're basically > handing over all communications and content for which you use that > mobile device to wechat developers/service operators and, by proxy, > the Chinese government. > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Mon Sep 17 15:00:56 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 17 Sep 2018 10:00:56 -0500 Subject: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch Message-ID: <20180917150056.GA10750@sm-workstation> Hello Cinder and Infra teams. Cinder needs some help from infra or some pointers on how to proceed. tl;dr - The openstack/cinder repo had a driverfixes/ocata branch created for fixes that no longer met the more restrictive phase II stable policy criteria. Extended maintenance has changed that and we want to delete driverfixes/ocata to make sure patches are going to the right place. Background ---------- Before the extended maintenance changes, the Cinder team found a lot of vendors were maintaining their own forks to keep backported driver fixes that we were not allowing upstream due to the stable policy being more restrictive for older (or deleted) branches. We created the driverfixes/* branches as a central place for these to go so distros would have one place to grab these fixes, if they chose to do so. This has worked great IMO, and we do occasionally still have things that need to go to driverfixes/mitaka and driverfixes/newton. We had also pushed a lot of fixes to driverfixes/ocata, but with the changes to stable policy with extended maintenance, that is no longer needed. Extended Maintenance Changes ---------------------------- With things being somewhat relaxed with the extended maintenance changes, we are now able to backport bug fixes to stable/ocata that we couldn't before and we don't have to worry as much about that branch being deleted. I had gone through and identified all patches backported to driverfixes/ocata but not stable/ocata and cherry-picked them over to get the two branches in sync. The stable/ocata should now be identical or ahead of driverfixes/ocata and we want to make sure nothing more gets accidentally merged to driverfixes/ocata instead of the official stable branch. 
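For reference, the branch comparison described above can be sketched with plain git in a throwaway repository. Only the branch names below match the cinder repo; the commits are synthetic, so this is an illustration of the technique rather than the actual cinder history:

```shell
# Throwaway-repo demo: list commits reachable from driverfixes/ocata
# but not from stable/ocata (the branch names mirror openstack/cinder;
# the commits themselves are synthetic).
set -e
work=$(mktemp -d)
git init -q "$work/repo"
cd "$work/repo"
git config user.email editor@example.com
git config user.name editor
git commit -q --allow-empty -m "common history"
git branch stable/ocata
git checkout -q -b driverfixes/ocata
git commit -q --allow-empty -m "driver fix not yet on stable"
# Commits on driverfixes/ocata that stable/ocata does not contain:
missing=$(git log driverfixes/ocata ^stable/ocata --no-merges --oneline)
echo "$missing"
```

Each line of `$missing` is a candidate for cherry-picking to the stable branch; an empty result would mean the branches are already in sync.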
Plan ---- We would now like to have the driverfixes/ocata branch deleted so there is no confusion about where backports should go and we don't accidentally get these out of sync again. Infra team, please delete this branch or let me know if there is a process somewhere I should follow to have this removed. Thanks! Sean (smcginnis) From cboylan at sapwetik.org Mon Sep 17 15:36:56 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 17 Sep 2018 08:36:56 -0700 Subject: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch In-Reply-To: <20180917150056.GA10750@sm-workstation> References: <20180917150056.GA10750@sm-workstation> Message-ID: <1537198616.4138246.1510937528.5167ACBD@webmail.messagingengine.com> On Mon, Sep 17, 2018, at 8:00 AM, Sean McGinnis wrote: > Hello Cinder and Infra teams. Cinder needs some help from infra or some > pointers on how to proceed. > > tl;dr - The openstack/cinder repo had a driverfixes/ocata branch created for > fixes that no longer met the more restrictive phase II stable policy criteria. > Extended maintenance has changed that and we want to delete driverfixes/ocata > to make sure patches are going to the right place. > > Background > ---------- > Before the extended maintenance changes, the Cinder team found a lot of vendors > were maintaining their own forks to keep backported driver fixes that we were > not allowing upstream due to the stable policy being more restrictive for older > (or deleted) branches. We created the driverfixes/* branches as a central place > for these to go so distros would have one place to grab these fixes, if they > chose to do so. > > This has worked great IMO, and we do occasionally still have things that need > to go to driverfixes/mitaka and driverfixes/newton. We had also pushed a lot of > fixes to driverfixes/ocata, but with the changes to stable policy with extended > maintenance, that is no longer needed. 
> > Extended Maintenance Changes > ---------------------------- > With things being somewhat relaxed with the extended maintenance changes, we > are now able to backport bug fixes to stable/ocata that we couldn't before and > we don't have to worry as much about that branch being deleted. > > I had gone through and identified all patches backported to driverfixes/ocata > but not stable/ocata and cherry-picked them over to get the two branches in > sync. The stable/ocata should now be identical or ahead of driverfixes/ocata > and we want to make sure nothing more gets accidentally merged to > driverfixes/ocata instead of the official stable branch. > > Plan > ---- > We would now like to have the driverfixes/ocata branch deleted so there is no > confusion about where backports should go and we don't accidentally get these > out of sync again. > > Infra team, please delete this branch or let me know if there is a process > somewhere I should follow to have this removed. The first step is to make sure that all changes on the branch are in a non open state (merged or abandoned). https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open shows that there are no open changes. Next you will want to make sure that the commits on this branch are preserved somehow. Git garbage collection will delete and cleanup commits if they are not discoverable when working backward from some ref. This is why our old stable branch deletion process required we tag the stable branch as $release-eol first. Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` there are quite a few commits on the driverfixes branch that are not on the stable branch, but that appears to be due to cherry pick writing new commits. You have indicated above that you believe the two branches are in sync at this point. A quick sampling of commits seems to confirm this as well. 
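As an illustrative aside, the preservation step described above amounts to tagging the branch head before deleting the branch, so the commits stay reachable and git garbage collection cannot prune them. The sketch below uses a throwaway repository, and the tag name is hypothetical, not an actual ref in openstack/cinder:

```shell
# Throwaway-repo demo: tag a branch head before deleting the branch,
# then confirm the commit is still reachable afterwards.
set -e
work=$(mktemp -d)
git init -q "$work/repo"
cd "$work/repo"
git config user.email editor@example.com
git config user.name editor
git commit -q --allow-empty -m "base"
git checkout -q -b driverfixes/ocata
git commit -q --allow-empty -m "driver fix"
head=$(git rev-parse driverfixes/ocata)
git tag driverfixes-ocata-eol driverfixes/ocata  # hypothetical tag name
git checkout -q --detach                         # can't delete the checked-out branch
git branch -q -D driverfixes/ocata
# The commit survives deletion because the tag still reaches it:
git cat-file -e "$head"
echo "still reachable via tag: $(git rev-parse driverfixes-ocata-eol)"
```

Without the tag, the branch-tip commit would become unreachable after deletion and eventually be garbage-collected, which is the failure mode being guarded against here.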
If you can go ahead and confirm that you are ready to delete the driverfixes/ocata branch I will go ahead and remove it. Clark From sean.mcginnis at gmx.com Mon Sep 17 15:46:00 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 17 Sep 2018 10:46:00 -0500 Subject: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch In-Reply-To: <1537198616.4138246.1510937528.5167ACBD@webmail.messagingengine.com> References: <20180917150056.GA10750@sm-workstation> <1537198616.4138246.1510937528.5167ACBD@webmail.messagingengine.com> Message-ID: <20180917154559.GA14040@sm-workstation> > > > > Plan > > ---- > > We would now like to have the driverfixes/ocata branch deleted so there is no > > confusion about where backports should go and we don't accidentally get these > > out of sync again. > > > > Infra team, please delete this branch or let me know if there is a process > > somewhere I should follow to have this removed. > > The first step is to make sure that all changes on the branch are in a non open state (merged or abandoned). https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open shows that there are no open changes. > > Next you will want to make sure that the commits on this branch are preserved somehow. Git garbage collection will delete and cleanup commits if they are not discoverable when working backward from some ref. This is why our old stable branch deletion process required we tag the stable branch as $release-eol first. Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` there are quite a few commits on the driverfixes branch that are not on the stable branch, but that appears to be due to cherry pick writing new commits. You have indicated above that you believe the two branches are in sync at this point. A quick sampling of commits seems to confirm this as well. 
> > If you can go ahead and confirm that you are ready to delete the driverfixes/ocata branch I will go ahead and remove it. > > Clark > I did another spot check too to make sure I hadn't missed anything, but it does appear to be as you stated that the cherry pick resulted in new commits and they actually are in sync for our purposes. I believe we are ready to proceed. Thanks for your help. Sean From jfrancoa at redhat.com Mon Sep 17 15:50:04 2018 From: jfrancoa at redhat.com (Jose Luis Franco Arza) Date: Mon, 17 Sep 2018 17:50:04 +0200 Subject: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO In-Reply-To: References: <8b2cab2b-34c4-705e-5c3a-c310ccb919f1@ericsson.com> Message-ID: Hi, From the upgrades/updates point of view it should be OK to drop the Ocata jobs. The only one covering the Ocata to Pike upgrade in upstream CI is running in RDO cloud as a non-voting one, and it is failing at the moment. Thanks, Jose Luis On Fri, Sep 14, 2018 at 6:33 PM Alex Schultz wrote: > On Fri, Sep 14, 2018 at 10:20 AM, Elõd Illés > wrote: > > Hi, > > > > just a comment: Ocata release is not EOL [1][2] rather in Extended > > Maintenance. Do you really want to EOL TripleO stable/ocata? > > > > Yes unless there are any objections. We've already been keeping this > branch alive on life support but CI has started to fail and we've just > been turning off jobs as they fail. We had not planned on extended > maintenance for Ocata (or Pike). We'll likely consider that starting > with Queens. We could switch it to extended maintenance but without > the promotion jobs we won't have packages to run CI so it would be > better to just EOL it. 
> > Thanks, > -Alex > > > [1] https://releases.openstack.org/ > > [2] > > > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > > > > Cheers, > > > > Előd > > > > > > > > On 2018-09-14 09:20, Juan Antonio Osorio Robles wrote: > >> > >> > >> On 09/14/2018 09:01 AM, Alex Schultz wrote: > >>> > >>> On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar > >>> wrote: > >>>> > >>>> Hello, > >>>> > >>>> As Ocata release is already EOL on 27-08-2018 [1]. > >>>> In TripleO, we are running Ocata jobs in TripleO CI and in promotion > >>>> pipelines. > >>>> Can we drop it all the jobs related to Ocata or do we need to keep > some > >>>> jobs > >>>> to support upgrades in CI? > >>>> > >>> I think unless there are any objections around upgrades, we can drop > >>> the promotion pipelines. It's likely that we'll also want to > >>> officially EOL the tripleo ocata branches. > >> > >> sounds good to me. > >>> > >>> Thanks, > >>> -Alex > >>> > >>>> Links: > >>>> [1.] https://releases.openstack.org/ > >>>> > >>>> Thanks, > >>>> > >>>> Chandan Kumar > >>>> > >>>> > >>>> > __________________________________________________________________________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: > >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> > >>> > __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Sep 17 15:53:20 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 17 Sep 2018 10:53:20 -0500 Subject: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch In-Reply-To: <20180917154559.GA14040@sm-workstation> References: <20180917150056.GA10750@sm-workstation> <1537198616.4138246.1510937528.5167ACBD@webmail.messagingengine.com> <20180917154559.GA14040@sm-workstation> Message-ID: On 9/17/2018 10:46 AM, Sean McGinnis wrote: >>> Plan >>> ---- >>> We would now like to have the driverfixes/ocata branch deleted so there is no >>> confusion about where backports should go and we don't accidentally get these >>> out of sync again. >>> >>> Infra team, please delete this branch or let me know if there is a process >>> somewhere I should follow to have this removed. >> The first step is to make sure that all changes on the branch are in a non open state (merged or abandoned). https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open shows that there are no open changes. >> >> Next you will want to make sure that the commits on this branch are preserved somehow. 
Git garbage collection will delete and cleanup commits if they are not discoverable when working backward from some ref. This is why our old stable branch deletion process required we tag the stable branch as $release-eol first. Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` there are quite a few commits on the driverfixes branch that are not on the stable branch, but that appears to be due to cherry pick writing new commits. You have indicated above that you believe the two branches are in sync at this point. A quick sampling of commits seems to confirm this as well. >> >> If you can go ahead and confirm that you are ready to delete the driverfixes/ocata branch I will go ahead and remove it. >> >> Clark >> > I did another spot check too to make sure I hadn't missed anything, but it does > appear to be as you stated that the cherry pick resulted in new commits and > they actually are in sync for our purposes. > > I believe we are ready to proceed. Sean, Thank you for following up on this. I agree it is a good idea to remove the old driverfixes/ocata branch to avoid possible confusion in the future. Clark, Sean, myself and the team worked to carefully cherry-pick everything that was needed in stable/ocata so I am confident that we are ready to remove driverfixes/ocata. Thanks! Jay > > Thanks for your help. 
> > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cboylan at sapwetik.org Mon Sep 17 15:56:59 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 17 Sep 2018 08:56:59 -0700 Subject: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch In-Reply-To: References: <20180917150056.GA10750@sm-workstation> <1537198616.4138246.1510937528.5167ACBD@webmail.messagingengine.com> <20180917154559.GA14040@sm-workstation> Message-ID: <1537199819.4144376.1510972792.3431DFAD@webmail.messagingengine.com> On Mon, Sep 17, 2018, at 8:53 AM, Jay S Bryant wrote: > > > On 9/17/2018 10:46 AM, Sean McGinnis wrote: > >>> Plan > >>> ---- > >>> We would now like to have the driverfixes/ocata branch deleted so there is no > >>> confusion about where backports should go and we don't accidentally get these > >>> out of sync again. > >>> > >>> Infra team, please delete this branch or let me know if there is a process > >>> somewhere I should follow to have this removed. > >> The first step is to make sure that all changes on the branch are in a non open state (merged or abandoned). https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open shows that there are no open changes. > >> > >> Next you will want to make sure that the commits on this branch are preserved somehow. Git garbage collection will delete and cleanup commits if they are not discoverable when working backward from some ref. This is why our old stable branch deletion process required we tag the stable branch as $release-eol first. 
Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` there are quite a few commits on the driverfixes branch that are not on the stable branch, but that appears to be due to cherry pick writing new commits. You have indicated above that you believe the two branches are in sync at this point. A quick sampling of commits seems to confirm this as well. > >> > >> If you can go ahead and confirm that you are ready to delete the driverfixes/ocata branch I will go ahead and remove it. > >> > >> Clark > >> > > I did another spot check too to make sure I hadn't missed anything, but it does > > appear to be as you stated that the cherry pick resulted in new commits and > > they actually are in sync for our purposes. > > > > I believe we are ready to proceed. > Sean, > > Thank you for following up on this. I agree it is a good idea to remove > the old driverfixes/ocata branch to avoid possible confusion in the future. > > Clark, > > Sean, myself and the team worked to carefully cherry-pick everything > that was needed in stable/ocata so I am confident that we are ready to > remove driverfixes/ocata. > I have removed openstack/cinder driverfixes/ocata branch with HEAD a37cc259f197e1a515cf82deb342739a125b65c6. Clark From lbragstad at gmail.com Mon Sep 17 16:06:57 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 17 Sep 2018 10:06:57 -0600 Subject: [openstack-dev] [keystone] Rocky Retrospective Message-ID: This is typically something we do in-person during the PTG, but due to weather and travel approval we didn't have great representation last week. That said, let's try to do an asynchronous retrospective to gather feedback regarding the last cycle. Afterwards we can try and meet to go through specific things, if needed. I've created a doodle to see if we can get a time lined up [0]. The retrospective board [1] is available and waiting for your feedback! The board should be public, but if you need access to add cards, just ping me. 
I'll collect results from the doodle on Friday and see what times work. Thanks, Lance [0] https://doodle.com/poll/5vkztz9sumkbzp4h [1] https://trello.com/b/af8vmDPs/keystone-rocky-retrospective -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Sep 17 16:13:47 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 17 Sep 2018 11:13:47 -0500 Subject: [openstack-dev] Forum Topic Submission Period Message-ID: <5B9FD2BB.3060806@openstack.org> Hello Everyone! The Forum Topic Submission session started September 12 and will run through September 26th. Now is the time to wrangle the topics you gathered during your Brainstorming Phase and start pushing forum topics through. Don't rely only on a PTL to make the agenda... step on up and place the items you consider important front and center. As you may have noticed on the Forum Wiki (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP tool this year. We did our best to remove Summit specific language, but if you notice something, just know that you are submitting to the Forum. URL is here: https://www.openstack.org/summit/berlin-2018/call-for-presentations Looking forward to seeing everyone's submissions! If you have questions or concerns about the process, please don't hesitate to reach out. Cheers, Jimmy -------------- next part -------------- An HTML attachment was scrubbed... URL: From duc.openstack at gmail.com Mon Sep 17 16:20:02 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 17 Sep 2018 09:20:02 -0700 Subject: [openstack-dev] [senlin] Nominations to Senlin Core Team In-Reply-To: References: Message-ID: Voting has concluded. Welcome Jude and Erik to the Senlin Core team. On Thu, Sep 13, 2018 at 12:14 AM x Lyn wrote: > > +1 to both, looking forward to their future contribution. 
> > > On Sep 11, 2018, at 12:59 AM, Duc Truong wrote: > > > > Hi Senlin Core Team, > > > > I would like to nominate 2 new core reviewers for Senlin: > > > > [1] Jude Cross (jucross at blizzard.com) > > [2] Erik Olof Gunnar Andersson (eandersson at blizzard.com) > > > > Jude has been doing a number of reviews and contributed some important > > patches to Senlin during the Rocky cycle that resolved locking > > problems. > > > > Erik has the most number of reviews in Rocky and has contributed high > > quality code reviews for some time. > > > > [1] http://stackalytics.com/?module=senlin-group&metric=marks&release=rocky&user_id=jucross at blizzard.com > > [2] http://stackalytics.com/?module=senlin-group&metric=marks&user_id=eandersson&release=rocky > > > > Voting is open for 7 days. Please reply with your +1 vote in favor or > > -1 as a veto vote. > > > > Regards, > > > > Duc (dtruong) > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From duc.openstack at gmail.com Mon Sep 17 16:21:25 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 17 Sep 2018 09:21:25 -0700 Subject: [openstack-dev] [senlin][stable] Nominating chenyb4 to Senlin Stable Maintainers Team In-Reply-To: <20180914054959.GA5969@rcp.sl.cloud9.ibm.com> References: <20180914054959.GA5969@rcp.sl.cloud9.ibm.com> Message-ID: Voting has concluded. Welcome chenyb4 to the Senlin stable review team. On Thu, Sep 13, 2018 at 10:50 PM Qiming Teng wrote: > > +2 from me. 
> > Thanks. > - Qiming > > On Mon, Sep 10, 2018 at 09:56:10AM -0700, Duc Truong wrote: > > Hi Senlin Stable Team, > > > > I would like to nominate Yuanbin Chen (chenyb4) to the Senlin stable > > review team. Yuanbin has been doing stable reviews and shown that he > > understands the policy for merging stable patches [1]. > > > > Voting is open for 7 days. Please reply with your +1 vote in favor or > > -1 as a veto vote. > > > > [1] https://review.openstack.org/#/q/branch:%255Estable/.*+reviewedby:cybing4%2540gmail.com > > > > Regards, > > > > Duc (dtruong) > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kgiusti at gmail.com Mon Sep 17 17:18:28 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Mon, 17 Sep 2018 13:18:28 -0400 Subject: [openstack-dev] [goals][python3] mixed versions? 
In-Reply-To: <1536881798-sup-5841@lrrr.local> References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <1536783914-sup-2738@lrrr.local> <1536881798-sup-5841@lrrr.local> Message-ID: On Thu, Sep 13, 2018 at 7:39 PM Doug Hellmann wrote: > Excerpts from Jim Rollenhagen's message of 2018-09-13 12:08:08 -0600: > > On Wed, Sep 12, 2018 at 2:28 PM, Doug Hellmann > > wrote: > > > > > Excerpts from Doug Hellmann's message of 2018-09-12 12:04:02 -0600: > > > > Excerpts from Clark Boylan's message of 2018-09-12 10:44:55 -0700: > > > > > On Wed, Sep 12, 2018, at 10:23 AM, Jim Rollenhagen wrote: > > > > > > The process of operators upgrading Python versions across their > > > fleet came > > > > > > up this morning. It's fairly obvious that operators will want to > do > > > this in > > > > > > a rolling fashion. > > > > > > > > > > > > Has anyone considered doing this in CI? For example, running > > > multinode > > > > > > grenade with python 2 on one node and python 3 on the other node. > > > > > > > > > > > > Should we (openstack) test this situation, or even care? > > > > > > > > > > > > > > > > This came up in a Vancouver summit session (the python3 one I > think). > > > General consensus there seemed to be that we should have grenade jobs > that > > > run python2 on the old side and python3 on the new side and test the > update > > > from one to another through a release that way. Additionally there was > > > thought that the nova partial job (and similar grenade jobs) could > hold the > > > non upgraded node on python2 and that would talk to a python3 control > plane. > > > > > > > > > > I haven't seen or heard of anyone working on this yet though. > > > > > > > > > > Clark > > > > > > > > > > > > > IIRC, we also talked about not supporting multiple versions of > > > > python on a given node, so all of the services on a node would need > > > > to be upgraded together. 
> > > > > > > > Doug > > > > > > I spent a little time talking with the QA team about setting up > > > this job, and Attila pointed out that we should think about what > > > exactly we think would break during a 2-to-3 in-place upgrade like > > > this. > > > > > > Keeping in mind that we are still testing initial installation under > > > both versions and upgrades under python 2, do we have any specific > > > concerns about the python *version* causing upgrade issues? > > > > > > > A specific example brought up in the ironic room was the way we encode > > exceptions in oslo.messaging for transmitting over RPC. I know that we've > > found encoding bugs in that in the past, and one can imagine that RPC > > between a service running on py2 and a service running on py3 could have > > similar issues. > > Mixing python 2 and 3 components of the same service across nodes > does seem like an interesting case. I wonder if it's something we > could build a functional test job in oslo.messaging for, though, > without having to test every service separately. I'd be happy if > someone did that. > > Currently that's a hole in the oslo.messaging tests. I've opened a work item to address this in launchpad: https://bugs.launchpad.net/oslo.messaging/+bug/1792977 > > It's definitely edge cases that we'd be catching here (if any), so I'm > > personally fine with assuming it will just work. But I wanted to pose the > > question to the list, as we agreed this isn't only an ironic problem. > > Yes, definitely. I think it's likely to be a bit of work to set up the > jobs and run them for all services, which is why I'm trying to > understand if it's really needed. Thinking through the cases on the list > is a good way to get folks to poke holes in any assertions, so I > appreciate that you started the thread and that everyone is > participating. 
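To illustrate the exception-encoding concern raised earlier in this thread: the risky step is flattening an exception into text for transport and rebuilding it on the far side. The sketch below is purely illustrative — it is not oslo.messaging's actual wire format, and the function and field names are invented for the example:

```python
import json

def serialize_failure(exc):
    """Flatten an exception into JSON-safe data, as an RPC layer must.

    Everything becomes text before it hits the wire; on Python 2,
    calling str() on an exception carrying non-ASCII bytes is exactly
    where a mixed py2/py3 deployment could trip.
    """
    return json.dumps({
        "class": exc.__class__.__name__,
        "module": exc.__class__.__module__,
        "message": str(exc),
        "args": [str(a) for a in exc.args],
    })

def deserialize_failure(payload):
    """Rebuild a generic stand-in exception from the wire payload.

    A real implementation would re-import the original class; using a
    plain Exception keeps the sketch small.
    """
    data = json.loads(payload)
    exc = Exception(*data["args"])
    exc.original_class = "%s.%s" % (data["module"], data["class"])
    return exc

# Round-trip an exception the way an RPC reply might carry it:
wire = serialize_failure(ValueError("boom"))
restored = deserialize_failure(wire)
```

A cross-version functional test of the kind discussed above would run the serialize side under one interpreter and the deserialize side under the other, asserting the round trip survives non-ASCII message text.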
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Sep 17 17:31:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 17 Sep 2018 11:31:21 -0600 Subject: [openstack-dev] [tc] notes from stein ptg meetings of the technical committee Message-ID: <1537204771-sup-1590@lrrr.local> The TC held meetings on 2 days at the Stein PTG in Denver. On Sunday 9 September, we met for a few hours in the afternoon to discuss introspective topics. On Friday 14 September, we met for a full day to discuss community topics. Below is a summary of my notes. Please let me know if I omitted or misremembered anything. Agenda and notes: https://etherpad.openstack.org/p/tc-stein-ptg Backlog Review ============== We started the Sunday meeting with a review of all of the work that TC members are doing outside of the TC. This gave us a clearer view of what we could expect to be able to do as a group, given time constraints. With that grounding, we reviewed our backlog (https://wiki.openstack.org/wiki/Technical_Committee_Tracker) and removed many items that we either felt were completed, would not be completed, or were ongoing monitoring and did not need to be tracked as specific tasks. We also agreed to try to work together as a group on a smaller number of initiatives, rather than maintaining a long list of open items in the tracker. 
The python 2.7 deprecation timeline has been documented (https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html) and we saw no need to be more formal about documenting expectations for new projects joining the community, so we removed that item from the tracker. We have discussed building a set of documentation around project constellations, but decided the information we hoped to convey in constellations would better be conveyed through the software section of the foundation web site. Now that the data for that section is being moved to a repository that anyone in the community can update, we no longer felt it was necessary for the TC to commit to writing constellation documentation (although we would support someone else picking up that work). We dropped this item from the tracker. We had completed the item to add a key manager to the base services list when we added "a castellan-compatible" service, but had not removed it from the backlog because there was also some discussion about adding Barbican specifically. Work on that is on hold, so we removed the item from the tracker for now and will add it back in the future if we decide to move ahead with adding Barbican. We removed the item for improving the visibility of status updates because we considered it an ongoing task. We will still be working on the problem, but since it may never be done we didn't feel like tracking it as a "task" made sense. Doug, Thierry, and Jeremy talked with the Infra team about upgrading and adding tools later in the week. We removed StarlingX from the tracker because the work on that project is happening more out in the open, so we felt the initial status check was sufficiently completed. We removed the item for clarifying the terms of service for hosted projects on openstack.org infrastructure because that work is now being done by the team soon-to-be-called OpenDev. We removed the item that called for a review of the status of the electorate. 
The election officials produce some statistics after each election and they offered to add any details we thought were useful. Since this is a recurring item managed by the election team, we do not need to track it ourselves. We removed the item for improving the help-wanted list because it wasn't clear that the TC was the best group to manage a list of "roles" or other more detailed information. We discussed placing that information into team documentation or hosting it somewhere outside of the governance repository where more people could contribute. The kubernetes community has set up a forum, for example. We removed the task of updating the PTI for translation and PDF support in documentation builds because we worked out a way to do that without a governance change. http://lists.openstack.org/pipermail/openstack-dev/2018-September/134609.html Joint Leadership Meeting in Berlin ================================== Alan Clark, chair of the Foundation board of directors, was present for the meeting, and asked that we include an agenda item to start planning for the joint leadership meeting with the board, UC, Foundation staff, and representatives from other projects being piloted by the Foundation. We spent a little bit of time discussing the previous meeting, including some feedback from TC members that the project announcements made that morning caused some distraction during the day. Jonathan Bryce indicated that they will time those announcements better to avoid that problem, and that with the new process being developed there should be fewer surprises in the future. Alan also gave us the feedback that with the ongoing evolution of the OpenStack project, the board is going to expect the TC to start providing more strategic planning information. 
He recommended that we be ready to discuss major themes of ongoing work and give the board members the information they need to project a positive image to the press when they are approached for interviews during the summit. New Project Application Process =============================== We wrapped up Sunday with a discussion of our process for reviewing new project applications. Zane and Chris in particular felt the process for Adjutant was too painful for the project team because there was no way to know how long discussions might go on and no way for them to anticipate some of the issues they encountered. We talked about formalizing a "coach" position to have someone from the TC (or broader community) work with the team to prepare their application with sufficient detail, seek feedback before voting starts, etc. We also talked about adding a time limit to the process, so that teams at least have a rejection with feedback in a reasonable amount of time. Some of the less contentious discussions have taken from 1-4 months, with a few more contentious cases taking as long as 10 months. We did not settle on a time frame during the meeting, so I expect this to be a topic for us to work out during the next term. Team Health Review ================== We started Friday morning by reviewing the current health tracker process and results. A few TC members expressed doubts about the efficacy of asking teams to report their problems, but we agreed that the process was at least a good way to start building the relationships that would encourage teams to approach us in the future. Someone proposed that we draft a "welcome" email for new PTLs each cycle, to ensure that everyone understood the expectations and knew how to reach their liaisons, if needed. We also discussed some specific issues uncovered during this term, although they were not the types of issues that led us to start the process in the first place. 
We reviewed a few of the teams that seemed to be in danger based on their affiliation diversity, review team size, or other reports. We decided that having limited affiliation diversity was not necessarily a problem for projects that were product integration points or deployment tools. We said that we should keep our eye on smaller teams, but not take any action to change their governance status if they were maintaining enough activity to keep up with goals and releases. The keystone team reported some concerns over contributor burnout, mostly caused by their central position in the community and the fact that several core team members have moved on from the project recently so they lost a good bit of institutional knowledge. We talked about ways to set reasonable expectations for folks outside of the team, as well as balancing expectations for folks inside the team who have been trying to address some long-standing technical issues with little traction. We also spent some time talking about approaches for on-boarding contributors, including mentoring. Mentoring presents the challenge of investing time in folks who don’t stay with the project, but without that investment it is hard to see how teams are going to recruit and retain new members. We also discussed the changing nature of the community and the fact that we no longer have a large pool of contributors looking for work - they either have little freedom in what they choose to work on or they have little interest in some of the areas where help may be needed the most. Several teams are addressing review bandwidth by allowing patches to be approved by a single core. Different teams have adopted different policies, ranging from applying the rule only to trivial patches, to allowing one core to approve another’s work, to applying the rule to all changes. 
So far all of the teams experimenting with this approach report that things are working OK, and they have not encountered any major bugs as a result of the change in review policy. A few teams reported issues adding stable branch reviewers from project teams. Sean and Thierry are going to work with the stable maintenance SIG to ensure that teams are able to manage the members of their review teams, with guidance, so we can maintain healthy stable reviews. The Octavia team reported that the goals were not “helpful” to their team. This seems to have been triggered by the WSGI goal definition changing mid-cycle, after the team had nearly completed the earlier definition of the work. We should be able to avoid that problem in the future by doing more preliminary work to ensure the technical details of goals are resolved before adopting the goal. In our retrospective of the process itself we talked about standardizing the set of questions we ask and trying to minimize the amount of time it takes to review each team. I purposefully didn't describe a detailed process this time to see what sorts of things liaisons considered important, and the range was fairly wide. Some liaisons reviewed meeting logs and review statistics, while others simply contacted the PTL to ask a few basic questions. There was strong support for taking the email approach, with a set of common questions so that we have similar information from all teams. Global Outreach =============== We spent a bit of time talking about the need to communicate with parts of the community not active on the mailing list and IRC. The specific example of Chinese users and contributors who primarily use WeChat was raised and we talked about how to encourage those folks to join the rest of the community. 
Zhipeng Huang has proposed a TC resolution to encourage TC members to use social media channels to communicate with community members who are not active in our regular channels (https://review.openstack.org/#/c/602697/), and there is a mailing list thread (http://lists.openstack.org/pipermail/openstack-dev/2018-September/134684.html). Thierry also recommended attending events in China, if possible, because it offers a new perspective on that part of the community, and several of the TC members who have been able to do that agreed. Technical Vision ================ Zane presented his draft technical vision (https://review.openstack.org/592205) and we talked about gaps, such as the tension between services relying on integrating with each other versus running in "standalone" configurations and reusing components created by other communities. We also need to explain what the vision is for, and how it will be used by the TC and project teams. When the next draft is ready, we will publicize it more and ask for feedback from all of the project teams with the goal of having it ready to present at the joint leadership meeting at the upcoming summit in Berlin. SIGs, Working Groups, and Cross-project efforts =============================================== We closed out the afternoon with a request from the public cloud working group to help them with the feature requests they have identified, many of which would require work from multiple project teams. We talked about applying the community goal process to features like deleting a tenant from a cloud, and we talked about the idea that some other features may require multiple cycles of work, and therefore either multiple goals or some other approach. As a next step, we encouraged the SIG to add their suggestions to the community goals list (https://etherpad.openstack.org/p/community-goals) and to schedule forum sessions to talk about specific features, in addition to prioritization sessions for the SIG to review its list. 
We also talked about ways to find contributors willing and able to work on the ideas coming from SIGs and working groups. The original intent of setting up SIGs was to provide a way for people with common interests to work together. If contributors are not participating, that may indicate a lack of economic incentive or simply a lack of advertising of the SIG. Either way it makes recruiting folks to drive the work more challenging. From doug at doughellmann.com Mon Sep 17 17:39:07 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 17 Sep 2018 11:39:07 -0600 Subject: [openstack-dev] [goals][python3] mixed versions? In-Reply-To: References: <1536774295.1277719.1505890888.38E3EBF6@webmail.messagingengine.com> <1536775296-sup-6148@lrrr.local> <1536783914-sup-2738@lrrr.local> <1536881798-sup-5841@lrrr.local> Message-ID: <1537205866-sup-6133@lrrr.local> Excerpts from Ken Giusti's message of 2018-09-17 13:18:28 -0400: > On Thu, Sep 13, 2018 at 7:39 PM Doug Hellmann wrote: > > Excerpts from Jim Rollenhagen's message of 2018-09-13 12:08:08 -0600: [snip] > > > A specific example brought up in the ironic room was the way we encode > > > exceptions in oslo.messaging for transmitting over RPC. I know that we've > > > found encoding bugs in that in the past, and one can imagine that RPC > > > between a service running on py2 and a service running on py3 could have > > > similar issues. > > > > Mixing python 2 and 3 components of the same service across nodes > > does seem like an interesting case. I wonder if it's something we > > could build a functional test job in oslo.messaging for, though, > > without having to test every service separately. I'd be happy if > > someone did that. > > > > > Currently that's a hole in the oslo.messaging tests. I've opened a work > item to address this in launchpad: > https://bugs.launchpad.net/oslo.messaging/+bug/1792977 Thanks, Ken! 
Doug From jaypipes at gmail.com Mon Sep 17 19:06:40 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 17 Sep 2018 15:06:40 -0400 Subject: [openstack-dev] [tc] notes from stein ptg meetings of the technical committee In-Reply-To: <1537204771-sup-1590@lrrr.local> References: <1537204771-sup-1590@lrrr.local> Message-ID: <5030a1e4-23cd-3cc9-1f89-e895efc7eb5b@gmail.com> On 09/17/2018 01:31 PM, Doug Hellmann wrote: > New Project Application Process > =============================== > > We wrapped up Sunday with a discussion of of our process for reviewing > new project applications. Zane and Chris in particular felt the > process for Adjutant was too painful for the project team because > there was no way to know how long discussions might go on and now > way for them to anticipate some of the issues they encountered. > > We talked about formalizing a "coach" position to have someone from > the TC (or broader community) work with the team to prepare their > application with sufficient detail, seek feedback before voting > starts, etc. > > We also talked about adding a time limit to the process, so that > teams at least have a rejection with feedback in a reasonable amount > of time. Some of the less contentious discussions have averaged > from 1-4 months with a few more contentious cases taking as long > as 10 months. We did not settle on a time frame during the meeting, > so I expect this to be a topic for us to work out during the next > term. So, to summarize... the TC is back to almost exactly the same point it was at right before the Project Structure Reform happened in 2014-2015 (that whole Big Tent thing). The Project Structure Reform occurred because the TC could not make decisions on whether projects should join OpenStack using objective criteria, and due to this, new project applicants were forced to endure long waits and subjective "graduation" reviews that could change from one TC election cycle to the next. 
The solution to this was to make an objective set of application criteria and remove the TC from the "Supreme Court of OpenStack" role that new applicants needed to come before and submit to the court's judgment. Many people complained that the Project Structure Reform was the TC simply abrogating responsibility for being a judgmental body. It seems that although we've now gotten rid of those objective criteria for project inclusion and gone back to the TC being a subjective judgmental body, the TC is still not actually willing to pass judgment one way or the other on new project applicants. Is this because it is still remarkably unclear what OpenStack actually *is* (the whole mission/scope thing)? Or is this because TC members simply don't want to be the ones to say "No" to well-meaning people who may have an idea that is only tangentially related to cloud computing? Everything old is new again. Best, -jay From samuel at cassi.ba Mon Sep 17 19:27:49 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Mon, 17 Sep 2018 12:27:49 -0700 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> Message-ID: On Mon, Sep 17, 2018 at 6:58 AM Sylvain Bauza wrote: > > > > Le lun. 17 sept. 2018 à 15:32, Jeremy Stanley a écrit : >> >> On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote: >> [...] >> > - What is the problem joining Wechat will solve (keeping in mind the >> > language barrier)? >> >> As I understand it, the suggestion is that mere presence of project >> leadership in venues where this emerging subset of our community >> gathers would provide a strong signal that we support them and care >> about their experience with the software. >> >> > - Isn't this problem already solved for other languages with >> > existing initiatives like local ambassadors and i18n team? Why >> > aren't these relevant? >> [...] 
>> >> It seems like there are at least a couple of factors at play here: >> first the significant number of users and contributors within >> mainland China compared to other regions (analysis suggests there >> were nearly as many contributors to the Rocky release from China as >> the USA), but second there may be facets of Chinese culture which >> make this sort of demonstrative presence a much stronger signal than >> it would be in other cultures. >> >> > - Pardon my ignorance here, what is the problem with email? (I >> > understand some chat systems might be blocked, I thought emails >> > would be fine, and the lowest common denominator). >> >> Someone in the TC room (forgive me, I don't recall who now, maybe >> Rico?) asserted that Chinese contributors generally only read the >> first message in any given thread (perhaps just looking for possible >> announcements?) and that if they _do_ attempt to read through some >> of the longer threads they don't participate in them because the >> discussion is presumed to be over and decisions final by the time >> they "reach the end" (I guess not realizing that it's perfectly fine >> to reply to a month-old discussion and try to help alter course on >> things if you have an actual concern?). >> > > While I understand the technical issues that could be due to using IRC in China, I still don't get why opening the gates and saying WeChat is yet another official channel would prevent our community from fragmenting. > > Truly the usage of IRC is certainly questionable, but if we have multiple ways to discuss, I just doubt we could keep ourselves from siloing into our personal preferences. 
> Either we consider the new channels as being only for southbound communication, or we envisage the possibility, as a community, of migrating from IRC to elsewhere (I'm particularly not a fan of the latter, so I would challenge this, but I can understand the reasons) > > -Sylvain > Objectively, I don't see a way to endorse something other than IRC without some form of collective presence on more than just Wechat to keep the message intact. IRC is the official messaging platform, for whatever that's worth these days. However, at present, it makes less and less sense to explicitly eschew other outlets in its favor. From a Chef OpenStack perspective, the common medium is, perhaps unsurprisingly, code review. Everything else evolved over time to be southbound paths to the code, including most of the conversation taking place there as opposed to IRC. The continuation of this thread only confirms that there is already fragmentation in the community, and that people on each side of the void genuinely want to close that gap. At this point, the thing to do is prevent further fragmentation of the intent. It is, however, far easier to bikeshed over which platform to choose. At present, it seems a collective presence is forming ad hoc, regardless of any such resolution. With some additional coordination and planning, I think that there could be something that could scale beyond one or two outlets. Best, Samuel From mriedemos at gmail.com Mon Sep 17 19:28:12 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 17 Sep 2018 14:28:12 -0500 Subject: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)? Message-ID: <70abbabe-2480-4c25-0665-a14b2eb5f3ab@gmail.com> This is a question from a change [1] which adds a new changes-before filter to the servers, os-instance-actions and os-migrations APIs. For context, the os-instance-actions API stopped accepting undefined query parameters in 2.58 when we added paging support. 
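To make the distinction concrete, here is a minimal hypothetical sketch (not Nova's or os-instance-actions' actual validation code; the parameter names and helper are invented for illustration) of the two behaviours being discussed: silently dropping undefined query parameters versus rejecting them, which is roughly what additionalProperties=False does in a JSON Schema:

```python
# Hypothetical sketch of the two validation behaviours under discussion.
# ALLOWED_PARAMS and validate_query are invented names, not Nova code.
ALLOWED_PARAMS = {"limit", "marker", "changes-since", "changes-before"}

def validate_query(params, strict):
    """Return the recognized params; raise on unknown keys when strict."""
    unknown = sorted(set(params) - ALLOWED_PARAMS)
    if strict and unknown:
        # additionalProperties=False behaviour: the request is rejected
        # (a 400 Bad Request at the API layer).
        raise ValueError("invalid query parameters: %s" % ", ".join(unknown))
    # additionalProperties=True behaviour: unknown keys silently dropped,
    # so a typo'd or too-new filter simply does not get applied.
    return {k: v for k, v in params.items() if k in ALLOWED_PARAMS}

# Old behaviour: the misspelled filter is ignored and the listing succeeds.
print(validate_query({"changes_before": "2018-09-17"}, strict=False))  # {}
```

The confusing case Matt describes is the non-strict branch: the caller believes a filter is in effect, but the server quietly discarded it.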
The os-migrations API stopped allowing undefined query parameters in 2.59 when we added paging support. The open question on the review is whether we should change GET /servers and GET /servers/detail to stop allowing undefined query parameters starting with microversion 2.66 [2]. Apparently when we added support for 2.5 and 2.26 for listing servers we didn't think about this. It means that a user can specify a query parameter, documented in the API reference, but with an older microversion and it will be silently ignored. That is backward compatible but confusing from an end user perspective since it would appear to them that the filter is not being applied, when in fact it would be if they used the correct microversion. So do we want to start enforcing query parameters when listing servers to our defined list with microversion 2.66 or just continue to silently ignore them if used incorrectly? Note that starting in Rocky, the Neutron API will start rejecting unknown query parameters [3] if the filter-validation extension is enabled (since Neutron doesn't use microversions). So there is some precedent in OpenStack for starting to enforce query parameters. [1] https://review.openstack.org/#/c/599276/ [2] https://review.openstack.org/#/c/599276/23/nova/api/openstack/compute/schemas/servers.py [3] https://docs.openstack.org/releasenotes/neutron/rocky.html#upgrade-notes -- Thanks, Matt From mnaser at vexxhost.com Mon Sep 17 19:42:20 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 17 Sep 2018 15:42:20 -0400 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> Message-ID: Hi, On that note, is there any way to get an 'invite' onto those channels? Is there any information from the foundation side about the 'official' channels? 
Thanks, Mohammed On Mon, Sep 17, 2018 at 3:28 PM Samuel Cassiba wrote: > > On Mon, Sep 17, 2018 at 6:58 AM Sylvain Bauza wrote: > > > > > > > > Le lun. 17 sept. 2018 à 15:32, Jeremy Stanley a écrit : > >> > >> On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote: > >> [...] > >> > - What is the problem joining Wechat will solve (keeping in mind the > >> > language barrier)? > >> > >> As I understand it, the suggestion is that mere presence of project > >> leadership in venues where this emerging subset of our community > >> gathers would provide a strong signal that we support them and care > >> about their experience with the software. > >> > >> > - Isn't this problem already solved for other languages with > >> > existing initiatives like local ambassadors and i18n team? Why > >> > aren't these relevant? > >> [...] > >> > >> It seems like there are at least couple of factors at play here: > >> first the significant number of users and contributors within > >> mainland China compared to other regions (analysis suggests there > >> were nearly as many contributors to the Rocky release from China as > >> the USA), but second there may be facets of Chinese culture which > >> make this sort of demonstrative presence a much stronger signal than > >> it would be in other cultures. > >> > >> > - Pardon my ignorance here, what is the problem with email? (I > >> > understand some chat systems might be blocked, I thought emails > >> > would be fine, and the lowest common denominator). > >> > >> Someone in the TC room (forgive me, I don't recall who now, maybe > >> Rico?) asserted that Chinese contributors generally only read the > >> first message in any given thread (perhaps just looking for possible > >> announcements?) 
and that if they _do_ attempt to read through some > >> of the longer threads they don't participate in them because the > >> discussion is presumed to be over and decisions final by the time > >> they "reach the end" (I guess not realizing that it's perfectly fine > >> to reply to a month-old discussion and try to help alter course on > >> things if you have an actual concern?). > >> > > > > While I understand the technical issues that could be due using IRC in China, I still don't get why opening the gates and saying WeChat being yet another official channel would prevent our community from fragmenting. > > > > Truly the usage of IRC is certainly questionable, but if we have multiple ways to discuss, I just doubt we could prevent us to silo ourselves between our personal usages. > > Either we consider the new channels as being only for southbound communication, or we envisage the possibility, as a community, to migrate from IRC to elsewhere (I'm particulary not fan of the latter so I would challenge this but I can understand the reasons) > > > > -Sylvain > > > > Objectively, I don't see a way to endorse something other than IRC > without some form of collective presence on more than just Wechat to > keep the message intact. IRC is the official messaging platform, for > whatever that's worth these days. However, at present, it makes less > and less sense to explicitly eschew other outlets in favor. From a > Chef OpenStack perspective, the common medium is, perhaps not > unsurprising, code review. Everything else evolved over time to be > southbound paths to the code, including most of the conversation > taking place there as opposed to IRC. > > The continuation of this thread only confirms that there is already > fragmentation in the community, and that people on each side of the > void genuinely want to close that gap. At this point, the thing to do > is prevent further fragmentation of the intent. 
It is, however, far > easier to bikeshed over which platform of choice. > > At present, it seems a collective presence is forming ad hoc, > regardless of any such resolution. With some additional coordination > and planning, I think that there could be something that could scale > beyond one or two outlets. > > Best, > Samuel > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From jaypipes at gmail.com Mon Sep 17 20:06:43 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 17 Sep 2018 16:06:43 -0400 Subject: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)? In-Reply-To: <70abbabe-2480-4c25-0665-a14b2eb5f3ab@gmail.com> References: <70abbabe-2480-4c25-0665-a14b2eb5f3ab@gmail.com> Message-ID: <75ef2549-dfba-3267-5e76-0c59c64cd4ac@gmail.com> On 09/17/2018 03:28 PM, Matt Riedemann wrote: > This is a question from a change [1] which adds a new changes-before > filter to the servers, os-instance-actions and os-migrations APIs. > > For context, the os-instance-actions API stopped accepting undefined > query parameters in 2.58 when we added paging support. > > The os-migrations API stopped allowing undefined query parameters in > 2.59 when we added paging support. > > The open question on the review is if we should change GET /servers and > GET /servers/detail to stop allowing undefined query parameters starting > with microversion 2.66 [2]. Apparently when we added support for 2.5 and > 2.26 for listing servers we didn't think about this. 
It means that a > user can specify a query parameter, documented in the API reference, but > with an older microversion and it will be silently ignored. That is > backward compatible but confusing from an end user perspective since it > would appear to them that the filter is not being applied, when it fact > it would be if they used the correct microversion. > > So do we want to start enforcing query parameters when listing servers > to our defined list with microversion 2.66 or just continue to silently > ignore them if used incorrectly? > > Note that starting in Rocky, the Neutron API will start rejecting > unknown query parameteres [3] if the filter-validation extension is > enabled (since Neutron doesn't use microversions). So there is some > precedent in OpenStack for starting to enforce query parameters. > > [1] https://review.openstack.org/#/c/599276/ > [2] > https://review.openstack.org/#/c/599276/23/nova/api/openstack/compute/schemas/servers.py > > [3] > https://docs.openstack.org/releasenotes/neutron/rocky.html#upgrade-notes My vote would be just change additionalProperties to False in the 599276 patch and be done with it. Add a release note about the change, of course. -jay From zbitter at redhat.com Mon Sep 17 20:12:30 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 17 Sep 2018 16:12:30 -0400 Subject: [openstack-dev] [tc] notes from stein ptg meetings of the technical committee In-Reply-To: <5030a1e4-23cd-3cc9-1f89-e895efc7eb5b@gmail.com> References: <1537204771-sup-1590@lrrr.local> <5030a1e4-23cd-3cc9-1f89-e895efc7eb5b@gmail.com> Message-ID: <59f39dda-5113-7e8d-7402-8b1711d25f66@redhat.com> On 17/09/18 3:06 PM, Jay Pipes wrote: > On 09/17/2018 01:31 PM, Doug Hellmann wrote: >> New Project Application Process >> =============================== >> >> We wrapped up Sunday with a discussion of of our process for reviewing >> new project applications. 
Zane and Chris in particular felt the >> process for Adjutant was too painful for the project team because >> there was no way to know how long discussions might go on and now >> way for them to anticipate some of the issues they encountered. >> >> We talked about formalizing a "coach" position to have someone from >> the TC (or broader community) work with the team to prepare their >> application with sufficient detail, seek feedback before voting >> starts, etc. >> >> We also talked about adding a time limit to the process, so that >> teams at least have a rejection with feedback in a reasonable amount >> of time.  Some of the less contentious discussions have averaged >> from 1-4 months with a few more contentious cases taking as long >> as 10 months. We did not settle on a time frame during the meeting, >> so I expect this to be a topic for us to work out during the next >> term. > > So, to summarize... the TC is back to almost exactly the same point it > was at right before the Project Structure Reform happened in 2014-2015 > (that whole Big Tent thing). I wouldn't go that far. There are more easy decisions than there were before the reform, but there still exist hard decisions. This is perhaps inevitable. > The Project Structure Reform occurred because the TC could not make > decisions on whether projects should join OpenStack using objective > criteria, and due to this, new project applicants were forced to endure > long waits and subjective "graduation" reviews that could change from > one TC election cycle to the next. > > The solution to this was to make an objective set of application > criteria and remove the TC from the "Supreme Court of OpenStack" role > that new applicants needed to come before and submit to the court's > judgment. > > Many people complained that the Project Structure Reform was the TC > simply abrogating responsibility for being a judgmental body. 
> > It seems that although we've now gotten rid of those objective criteria > for project inclusion and gone back to the TC being a subjective > judgmental body, that the TC is still not actually willing to pass > judgment one way or the other on new project applicants. No criteria have been gotten rid of, but even after the Project Structure Reform there existed criteria that were subjective. Here is a thread discussing them during the last TC election: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129622.html (I actually think that the perception that the criteria should be entirely objective might be a contributor to the problem: when faced with a subjective decision and no documentation or precedent to guide them, TC members can be reluctant to choose.) > Is this because it is still remarkably unclear what OpenStack actually > *is* (the whole mission/scope thing)? > > Or is this because TC members simply don't want to be the ones to say > "No" to good-meaning people I suspect both of those reasons are probably in the mix, along with a few others as well. > that may have an idea that is only > tangentially related to cloud computing? It should be noted that in this case Adjutant pretty clearly fills an essential use case for public clouds. The debate was around whether accepting it was likely to lead to the desired standardisation across public OpenStack clouds or effectively act as an official endorsement for API fragmentation. It's not clear that any change to the criteria could have made this particular decision any easier. Things did seem to go more smoothly after we nominated a couple of people to work directly with the project to polish their application, and in retrospect we probably should have treated it with more urgency rather than e.g. waiting for a face-to-face discussion at the Forum before attempting to make progress. Those are the lessons behind the process improvements that we discussed last week that Doug summarised above. 
cheers, Zane. From doug at doughellmann.com Mon Sep 17 20:50:03 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 17 Sep 2018 14:50:03 -0600 Subject: [openstack-dev] [tc] notes from stein ptg meetings of the technical committee In-Reply-To: <59f39dda-5113-7e8d-7402-8b1711d25f66@redhat.com> References: <1537204771-sup-1590@lrrr.local> <5030a1e4-23cd-3cc9-1f89-e895efc7eb5b@gmail.com> <59f39dda-5113-7e8d-7402-8b1711d25f66@redhat.com> Message-ID: <1537216461-sup-8994@lrrr.local> Excerpts from Zane Bitter's message of 2018-09-17 16:12:30 -0400: > On 17/09/18 3:06 PM, Jay Pipes wrote: > > On 09/17/2018 01:31 PM, Doug Hellmann wrote: > >> New Project Application Process > >> =============================== > >> > >> We wrapped up Sunday with a discussion of our process for reviewing > >> new project applications. Zane and Chris in particular felt the > >> process for Adjutant was too painful for the project team because > >> there was no way to know how long discussions might go on and no > >> way for them to anticipate some of the issues they encountered. > >> > >> We talked about formalizing a "coach" position to have someone from > >> the TC (or broader community) work with the team to prepare their > >> application with sufficient detail, seek feedback before voting > >> starts, etc. > >> > >> We also talked about adding a time limit to the process, so that > >> teams at least have a rejection with feedback in a reasonable amount > >> of time. Some of the less contentious discussions have averaged > >> from 1-4 months with a few more contentious cases taking as long > >> as 10 months. We did not settle on a time frame during the meeting, > >> so I expect this to be a topic for us to work out during the next > >> term. > > > > So, to summarize... the TC is back to almost exactly the same point it > > was at right before the Project Structure Reform happened in 2014-2015 > > (that whole Big Tent thing). > > I wouldn't go that far. 
There are more easy decisions than there were > before the reform, but there still exist hard decisions. This is perhaps > inevitable. > > > The Project Structure Reform occurred because the TC could not make > > decisions on whether projects should join OpenStack using objective > > criteria, and due to this, new project applicants were forced to endure > > long waits and subjective "graduation" reviews that could change from > > one TC election cycle to the next. > > > > The solution to this was to make an objective set of application > > criteria and remove the TC from the "Supreme Court of OpenStack" role > > that new applicants needed to come before and submit to the court's > > judgment. > > > > Many people complained that the Project Structure Reform was the TC > > simply abrogating responsibility for being a judgmental body. > > > > It seems that although we've now gotten rid of those objective criteria > > for project inclusion and gone back to the TC being a subjective > > judgmental body, that the TC is still not actually willing to pass > > judgment one way or the other on new project applicants. > > No criteria have been gotten rid of, but even after the Project > Structure Reform there existed criteria that were subjective. Here is a > thread discussing them during the last TC election: > > http://lists.openstack.org/pipermail/openstack-dev/2018-April/129622.html > > (I actually think that the perception that the criteria should be > entirely objective might be a contributor to the problem: when faced > with a subjective decision and no documentation or precedent to guide > them, TC members can be reluctant to choose.) I think turning the decision about which projects fit the mission into an entirely mechanical one would be a mistake. I would prefer us to use, and trust, our judgement in cases where the answer needs some thought. I don't remember the history quite the way Jay does, either. 
I remember us trying to base the decision more on what the team was doing than on how the code looked or whether the implementation met anyone's idea of "good". That's why we retained the requirement that the project "aligns with the OpenStack Mission". > > > Is this because it is still remarkably unclear what OpenStack actually > > *is* (the whole mission/scope thing)? > > > > Or is this because TC members simply don't want to be the ones to say > > "No" to good-meaning people > > I suspect both of those reasons are probably in the mix, along with a > few others as well. There was a good deal of confusion early on about what "workflow" meant in the context of Adjutant and whether the use of workflows was overlapping unnecessarily with Mistral. After that was clarified, we talked about the interoperability concerns with an API that may be different based on deployer choices. > > > that may have an idea that is only > > tangentially related to cloud computing? > > It should be noted that in this case Adjutant pretty clearly fills an > essential use case for public clouds. The debate was around whether > accepting it was likely to lead to the desired standardisation across > public OpenStack clouds or effectively act as an official endorsement > for API fragmentation. > > It's not clear that any change to the criteria could have made this > particular decision any easier. Only adding a specific rule about API interoperability would have addressed my concern directly. I'm not sure applying such a rule will always make sense (Thierry and Colleen very nearly convinced me that it would be OK for the Adjutant API on different clouds to be different, and there may be other cases where the argument is stronger). > Things did seem to go more smoothly after we nominated a couple of > people to work directly with the project to polish their application, > and in retrospect we probably should have treated it with more urgency > rather than e.g. 
waiting for a face-to-face discussion at the Forum > before attempting to make progress. Those are the lessons behind the > process improvements that we discussed last week that Doug summarised above. Right. I think both of those discussions dragged on because we didn't have anyone assigned to drive them and because it took us a while to clearly communicate the concerns from the TC and the answers from the Adjutant team. I don't have any issue with any of the questions that were raised during the review, just with the length of time it took us. Identifying coaches to help project teams through the process, and setting deadlines (similar to review deadlines for patches) should help us with that. Doug From jaypipes at gmail.com Mon Sep 17 21:07:43 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 17 Sep 2018 17:07:43 -0400 Subject: [openstack-dev] [tc] notes from stein ptg meetings of the technical committee In-Reply-To: <1537216461-sup-8994@lrrr.local> References: <1537204771-sup-1590@lrrr.local> <5030a1e4-23cd-3cc9-1f89-e895efc7eb5b@gmail.com> <59f39dda-5113-7e8d-7402-8b1711d25f66@redhat.com> <1537216461-sup-8994@lrrr.local> Message-ID: <019dad0e-8631-f6fe-e786-577f79f8edc6@gmail.com> On 09/17/2018 04:50 PM, Doug Hellmann wrote: > Excerpts from Zane Bitter's message of 2018-09-17 16:12:30 -0400: >> On 17/09/18 3:06 PM, Jay Pipes wrote: >>> On 09/17/2018 01:31 PM, Doug Hellmann wrote: >>>> New Project Application Process >>>> =============================== >>>> >>>> We wrapped up Sunday with a discussion of our process for reviewing >>>> new project applications. Zane and Chris in particular felt the >>>> process for Adjutant was too painful for the project team because >>>> there was no way to know how long discussions might go on and no >>>> way for them to anticipate some of the issues they encountered. 
>>>> >>>> We talked about formalizing a "coach" position to have someone from >>>> the TC (or broader community) work with the team to prepare their >>>> application with sufficient detail, seek feedback before voting >>>> starts, etc. >>>> >>>> We also talked about adding a time limit to the process, so that >>>> teams at least have a rejection with feedback in a reasonable amount >>>> of time.  Some of the less contentious discussions have averaged >>>> from 1-4 months with a few more contentious cases taking as long >>>> as 10 months. We did not settle on a time frame during the meeting, >>>> so I expect this to be a topic for us to work out during the next >>>> term. >>> >>> So, to summarize... the TC is back to almost exactly the same point it >>> was at right before the Project Structure Reform happened in 2014-2015 >>> (that whole Big Tent thing). >> >> I wouldn't go that far. There are more easy decisions than there were >> before the reform, but there still exist hard decisions. This is perhaps >> inevitable. >> >>> The Project Structure Reform occurred because the TC could not make >>> decisions on whether projects should join OpenStack using objective >>> criteria, and due to this, new project applicants were forced to endure >>> long waits and subjective "graduation" reviews that could change from >>> one TC election cycle to the next. >>> >>> The solution to this was to make an objective set of application >>> criteria and remove the TC from the "Supreme Court of OpenStack" role >>> that new applicants needed to come before and submit to the court's >>> judgment. >>> >>> Many people complained that the Project Structure Reform was the TC >>> simply abrogating responsibility for being a judgmental body. 
>>> >>> It seems that although we've now gotten rid of those objective criteria >>> for project inclusion and gone back to the TC being a subjective >>> judgmental body, that the TC is still not actually willing to pass >>> judgment one way or the other on new project applicants. >> >> No criteria have been gotten rid of, but even after the Project >> Structure Reform there existed criteria that were subjective. Here is a >> thread discussing them during the last TC election: >> >> http://lists.openstack.org/pipermail/openstack-dev/2018-April/129622.html >> >> (I actually think that the perception that the criteria should be >> entirely objective might be a contributor to the problem: when faced >> with a subjective decision and no documentation or precedent to guide >> them, TC members can be reluctant to choose.) > > I think turning the decision about which projects fit the mission > into an entirely mechanical one would be a mistake. I would prefer > us to use, and trust, our judgement in cases where the answer needs > some thought. > > I don't remember the history quite the way Jay does, either. I > remember us trying to base the decision more about what the team > was doing than how the code looked or whether the implementation > met anyone's idea of "good". That's why we retained the requirement > that the project "aligns with the OpenStack Mission". Hmm. I very specifically remember the incubation and graduation review of Zaqar and the fact that over a couple cycles of TC elections, the "advice" given by the TC about specific technical implementation details changed, often arbitrarily, depending on who was on the TC and what day of the week it was. In fact, I pretty vividly remember this arbitrary nature of the architectural review being one of the primary reasons we switched to a purely objective set of criteria. 
Also, for the record, I actually wasn't referring to Adjutant specifically when I referred in my original post to "only tangentially related to cloud computing". I was referring to my recollection of fairly recent history. I remember the seemingly endless debates about whether some applicants "fit" the OpenStack ecosystem or whether the applicant was merely trying to jump on a hype bandwagon for marketing purposes. Again, I wasn't specifically referring to Adjutant here, so I apologize if my words came across that way. Best, -jay From lbragstad at gmail.com Mon Sep 17 21:56:44 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 17 Sep 2018 15:56:44 -0600 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> Message-ID: On Mon, Sep 17, 2018 at 1:42 PM Mohammed Naser wrote: > Hi, > > On that note, is there any way to get an 'invite' onto those channels? > > Any information about the foundation side of things about the > 'official' channels? > I actually have a question about this as well. During the TC discussion last Friday there was representation from the Foundation in the room. I thought I remember someone (annabelleB?) saying there were known issues (technical or otherwise) regarding the official channels spun up by the Foundation. Does anyone know what issues were being referred to here? > > Thanks, > Mohammed > On Mon, Sep 17, 2018 at 3:28 PM Samuel Cassiba wrote: > > > > On Mon, Sep 17, 2018 at 6:58 AM Sylvain Bauza > wrote: > > > > > > > > > > > > Le lun. 17 sept. 2018 à 15:32, Jeremy Stanley a > écrit : > > >> > > >> On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote: > > >> [...] > > >> > - What is the problem joining Wechat will solve (keeping in mind the > > >> > language barrier)? 
> > >> > > >> As I understand it, the suggestion is that mere presence of project > > >> leadership in venues where this emerging subset of our community > > >> gathers would provide a strong signal that we support them and care > > >> about their experience with the software. > > >> > > >> > - Isn't this problem already solved for other languages with > > >> > existing initiatives like local ambassadors and i18n team? Why > > >> > aren't these relevant? > > >> [...] > > >> > > >> It seems like there are at least a couple of factors at play here: > > >> first the significant number of users and contributors within > > >> mainland China compared to other regions (analysis suggests there > > >> were nearly as many contributors to the Rocky release from China as > > >> the USA), but second there may be facets of Chinese culture which > > >> make this sort of demonstrative presence a much stronger signal than > > >> it would be in other cultures. > > >> > > >> > - Pardon my ignorance here, what is the problem with email? (I > > >> > understand some chat systems might be blocked, I thought emails > > >> > would be fine, and the lowest common denominator). > > >> > > >> Someone in the TC room (forgive me, I don't recall who now, maybe > > >> Rico?) asserted that Chinese contributors generally only read the > > >> first message in any given thread (perhaps just looking for possible > > >> announcements?) and that if they _do_ attempt to read through some > > >> of the longer threads they don't participate in them because the > > >> discussion is presumed to be over and decisions final by the time > > >> they "reach the end" (I guess not realizing that it's perfectly fine > > >> to reply to a month-old discussion and try to help alter course on > > >> things if you have an actual concern?). 
> > >> > > > > > While I understand the technical issues that could be due to using IRC in > China, I still don't get why opening the gates and making WeChat yet > another official channel would prevent our community from fragmenting. > > > > > > Truly the usage of IRC is certainly questionable, but if we have > multiple ways to discuss, I just doubt we could prevent us from siloing > ourselves between our personal usages. > > > Either we consider the new channels as being only for southbound > communication, or we envisage the possibility, as a community, to migrate > from IRC to elsewhere (I'm particularly not a fan of the latter so I would > challenge this but I can understand the reasons) > > > > > > -Sylvain > > > > > > > Objectively, I don't see a way to endorse something other than IRC > > without some form of collective presence on more than just Wechat to > > keep the message intact. IRC is the official messaging platform, for > > whatever that's worth these days. However, at present, it makes less > > and less sense to explicitly eschew other outlets in its favor. From a > > Chef OpenStack perspective, the common medium is, perhaps > > unsurprisingly, code review. Everything else evolved over time to be > > southbound paths to the code, including most of the conversation > > taking place there as opposed to IRC. > > > > The continuation of this thread only confirms that there is already > > fragmentation in the community, and that people on each side of the > > void genuinely want to close that gap. At this point, the thing to do > > is prevent further fragmentation of the intent. It is, however, far > > easier to bikeshed over the platform of choice. > > > > At present, it seems a collective presence is forming ad hoc, > > regardless of any such resolution. With some additional coordination > > and planning, I think that there could be something that could scale > > beyond one or two outlets. 
> > > > Best, > > Samuel > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Sep 17 22:51:19 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 17 Sep 2018 16:51:19 -0600 Subject: [openstack-dev] [tc] notes from stein ptg meetings of the technical committee In-Reply-To: <019dad0e-8631-f6fe-e786-577f79f8edc6@gmail.com> References: <1537204771-sup-1590@lrrr.local> <5030a1e4-23cd-3cc9-1f89-e895efc7eb5b@gmail.com> <59f39dda-5113-7e8d-7402-8b1711d25f66@redhat.com> <1537216461-sup-8994@lrrr.local> <019dad0e-8631-f6fe-e786-577f79f8edc6@gmail.com> Message-ID: <1537223050-sup-9776@lrrr.local> Excerpts from Jay Pipes's message of 2018-09-17 17:07:43 -0400: > On 09/17/2018 04:50 PM, Doug Hellmann wrote: > > Excerpts from Zane Bitter's message of 2018-09-17 16:12:30 -0400: > >> On 17/09/18 3:06 PM, Jay Pipes wrote: > >>> On 09/17/2018 01:31 PM, Doug Hellmann wrote: > >>>> New Project Application Process > >>>> =============================== > >>>> > >>>> We wrapped up Sunday with a discussion of of our process for reviewing > >>>> new project applications. 
Zane and Chris in particular felt the > >>>> process for Adjutant was too painful for the project team because > >>>> there was no way to know how long discussions might go on and no > >>>> way for them to anticipate some of the issues they encountered. > >>>> > >>>> We talked about formalizing a "coach" position to have someone from > >>>> the TC (or broader community) work with the team to prepare their > >>>> application with sufficient detail, seek feedback before voting > >>>> starts, etc. > >>>> > >>>> We also talked about adding a time limit to the process, so that > >>>> teams at least have a rejection with feedback in a reasonable amount > >>>> of time. Some of the less contentious discussions have averaged > >>>> from 1-4 months with a few more contentious cases taking as long > >>>> as 10 months. We did not settle on a time frame during the meeting, > >>>> so I expect this to be a topic for us to work out during the next > >>>> term. > >>> > >>> So, to summarize... the TC is back to almost exactly the same point it > >>> was at right before the Project Structure Reform happened in 2014-2015 > >>> (that whole Big Tent thing). > >> > >> I wouldn't go that far. There are more easy decisions than there were > >> before the reform, but there still exist hard decisions. This is perhaps > >> inevitable. > >> > >>> The Project Structure Reform occurred because the TC could not make > >>> decisions on whether projects should join OpenStack using objective > >>> criteria, and due to this, new project applicants were forced to endure > >>> long waits and subjective "graduation" reviews that could change from > >>> one TC election cycle to the next. > >>> > >>> The solution to this was to make an objective set of application > >>> criteria and remove the TC from the "Supreme Court of OpenStack" role > >>> that new applicants needed to come before and submit to the court's > >>> judgment. 
> >>> > >>> Many people complained that the Project Structure Reform was the TC > >>> simply abrogating responsibility for being a judgmental body. > >>> > >>> It seems that although we've now gotten rid of those objective criteria > >>> for project inclusion and gone back to the TC being a subjective > >>> judgmental body, that the TC is still not actually willing to pass > >>> judgment one way or the other on new project applicants. > >> > >> No criteria have been gotten rid of, but even after the Project > >> Structure Reform there existed criteria that were subjective. Here is a > >> thread discussing them during the last TC election: > >> > >> http://lists.openstack.org/pipermail/openstack-dev/2018-April/129622.html > >> > >> (I actually think that the perception that the criteria should be > >> entirely objective might be a contributor to the problem: when faced > >> with a subjective decision and no documentation or precedent to guide > >> them, TC members can be reluctant to choose.) > > > > I think turning the decision about which projects fit the mission > > into an entirely mechanical one would be a mistake. I would prefer > > us to use, and trust, our judgement in cases where the answer needs > > some thought. > > > > I don't remember the history quite the way Jay does, either. I > > remember us trying to base the decision more about what the team > > was doing than how the code looked or whether the implementation > > met anyone's idea of "good". That's why we retained the requirement > > that the project "aligns with the OpenStack Mission". > > Hmm. I very specifically remember the incubation and graduation review > of Zaqar and the fact that over a couple cycles of TC elections, the > "advice" given by the TC about specific technical implementation details > changed, often arbitrarily, depending on who was on the TC and what day > of the week it was. 
In fact, I pretty vividly remember this arbitrary > nature of the architectural review being one of the primary reasons we > switched to a purely objective set of criteria. I remember talking about objectivity, but I also remember that we stopped reviewing aspects of a project like its architecture or implementation details to avoid having the case you describe recur. I remember that because I had a hard time coming around to that point of view, at first. You're correct, however, that the resolution we adopted as the first step toward the big tent change (https://governance.openstack.org/tc/resolutions/20141202-project-structure-reform-spec.html#recognize-all-our-community-is-a-part-of-openstack) does talk about making decisions based on team practices and projects fitting the mission as being objective requirements. And the patch that implemented the first part of the big tent change (https://review.openstack.org/#/c/145740/14) also talks about objectivity. It's interesting that we took different things away from the same discussion. :-) In any case, I think we've learned there is still quite a bit of subjectivity in the question about whether a project fits the mission. > Also, for the record, I actually wasn't referring to Adjutant > specifically when I referred in my original post to "only tangentially > related to cloud computing". I was referring to my recollection of > fairly recent history. I remember the seemingly endless debates about > whether some applicants "fit" the OpenStack ecosystem or whether the > applicant was merely trying to jump on a hype bandwagon for marketing > purposes. Again, I wasn't specifically referring to Adjutant here, so I > apologize if my words came across that way. This topic came up in the meeting because of the Adjutant evaluation taking so long. 
Doug From zhipengh512 at gmail.com Mon Sep 17 23:05:26 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 18 Sep 2018 07:05:26 +0800 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> Message-ID: Would like to see some updates on the foundation's official wechat group being set up :) On another note, I would like to point out that this email is merely asking who would be interested. The question about "dividing teams" and such is addressed in https://review.openstack.org/602697 . On Tue, Sep 18, 2018 at 5:57 AM Lance Bragstad wrote: > > > On Mon, Sep 17, 2018 at 1:42 PM Mohammed Naser > wrote: > >> Hi, >> >> On that note, is there any way to get an 'invite' onto those channels? >> >> Any information about the foundation side of things about the >> 'official' channels? >> > > I actually have a question about this as well. During the TC discussion > last Friday there was representation from the Foundation in the room. I > thought I remember someone (annabelleB?) saying there were known issues > (technical or otherwise) regarding the official channels spun up by the > Foundation. > > Does anyone know what issues were being referred to here? > > >> >> Thanks, >> Mohammed >> On Mon, Sep 17, 2018 at 3:28 PM Samuel Cassiba wrote: >> > >> > On Mon, Sep 17, 2018 at 6:58 AM Sylvain Bauza >> wrote: >> > > >> > > >> > > >> > > Le lun. 17 sept. 2018 à 15:32, Jeremy Stanley a >> écrit : >> > >> >> > >> On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote: >> > >> [...] >> > >> > - What is the problem joining Wechat will solve (keeping in mind >> the >> > >> > language barrier)? 
>> > >> >> > >> As I understand it, the suggestion is that mere presence of project >> > >> leadership in venues where this emerging subset of our community >> > >> gathers would provide a strong signal that we support them and care >> > >> about their experience with the software. >> > >> >> > >> > - Isn't this problem already solved for other languages with >> > >> > existing initiatives like local ambassadors and i18n team? Why >> > >> > aren't these relevant? >> > >> [...] >> > >> >> > >> It seems like there are at least a couple of factors at play here: >> > >> first the significant number of users and contributors within >> > >> mainland China compared to other regions (analysis suggests there >> > >> were nearly as many contributors to the Rocky release from China as >> > >> the USA), but second there may be facets of Chinese culture which >> > >> make this sort of demonstrative presence a much stronger signal than >> > >> it would be in other cultures. >> > >> >> > >> > - Pardon my ignorance here, what is the problem with email? (I >> > >> > understand some chat systems might be blocked, I thought emails >> > >> > would be fine, and the lowest common denominator). >> > >> >> > >> Someone in the TC room (forgive me, I don't recall who now, maybe >> > >> Rico?) asserted that Chinese contributors generally only read the >> > >> first message in any given thread (perhaps just looking for possible >> > >> announcements?) and that if they _do_ attempt to read through some >> > >> of the longer threads they don't participate in them because the >> > >> discussion is presumed to be over and decisions final by the time >> > >> they "reach the end" (I guess not realizing that it's perfectly fine >> > >> to reply to a month-old discussion and try to help alter course on >> > >> things if you have an actual concern?). 
>> > >> >> > > >> > > While I understand the technical issues that could be due to using IRC >> in China, I still don't get why opening the gates and making WeChat >> yet another official channel would prevent our community from fragmenting. >> > > >> > > Truly the usage of IRC is certainly questionable, but if we have >> multiple ways to discuss, I just doubt we could prevent us from siloing >> ourselves between our personal usages. >> > > Either we consider the new channels as being only for southbound >> communication, or we envisage the possibility, as a community, to migrate >> from IRC to elsewhere (I'm particularly not a fan of the latter so I would >> challenge this but I can understand the reasons) >> > > >> > > -Sylvain >> > > >> > >> > Objectively, I don't see a way to endorse something other than IRC >> > without some form of collective presence on more than just Wechat to >> > keep the message intact. IRC is the official messaging platform, for >> > whatever that's worth these days. However, at present, it makes less >> > and less sense to explicitly eschew other outlets in its favor. From a >> > Chef OpenStack perspective, the common medium is, perhaps >> > unsurprisingly, code review. Everything else evolved over time to be >> > southbound paths to the code, including most of the conversation >> > taking place there as opposed to IRC. >> > >> > The continuation of this thread only confirms that there is already >> > fragmentation in the community, and that people on each side of the >> > void genuinely want to close that gap. At this point, the thing to do >> > is prevent further fragmentation of the intent. It is, however, far >> > easier to bikeshed over the platform of choice. >> > >> > At present, it seems a collective presence is forming ad hoc, >> > regardless of any such resolution. With some additional coordination >> > and planning, I think that there could be something that could scale >> > beyond one or two outlets. 
>> > >> > Best, >> > Samuel >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> Mohammed Naser — vexxhost >> ----------------------------------------------------- >> D. 514-316-8872 >> D. 800-910-1726 ext. 200 >> E. mnaser at vexxhost.com >> W. http://vexxhost.com >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anne at openstack.org Mon Sep 17 23:06:24 2018 From: anne at openstack.org (Anne Bertucio) Date: Mon, 17 Sep 2018 16:06:24 -0700 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> Message-ID: > I thought I remember someone (annabelleB?) saying there were known issues (technical or otherwise) regarding the official channels spun up by the Foundation. Two separate issues that perhaps got mashed together :) Unofficial WeChat channels are limited to ~500 participants and are invite-only. That poses a few challenges for a community of our size (much more than 500!). Official subscription channels don’t have these limitations, but there’s a lengthy process to get one. It’s currently in progress (unfortunately I don’t think we have an ETA beyond “in progress” at this point—more than one month; less than six months?). Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB > On Sep 17, 2018, at 2:56 PM, Lance Bragstad wrote: > > > > On Mon, Sep 17, 2018 at 1:42 PM Mohammed Naser > wrote: > Hi, > > On that note, is there any way to get an 'invite' onto those channels? > > Any information about the foundation side of things about the > 'official' channels? > > I actually have a question about this as well. During the TC discussion last Friday there was representation from the Foundation in the room. I thought I remember someone (annabelleB?) saying there were known issues (technical or otherwise) regarding the official channels spun up by the Foundation. > > Does anyone know what issues were being referred to here? > > > Thanks, > Mohammed > On Mon, Sep 17, 2018 at 3:28 PM Samuel Cassiba > wrote: > > > > On Mon, Sep 17, 2018 at 6:58 AM Sylvain Bauza > wrote: > > > > > > > > > > > > Le lun. 17 sept. 
2018 à 15:32, Jeremy Stanley > a écrit : > > >> > > >> On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote: > > >> [...] > > >> > - What is the problem joining Wechat will solve (keeping in mind the > > >> > language barrier)? > > >> > > >> As I understand it, the suggestion is that mere presence of project > > >> leadership in venues where this emerging subset of our community > > >> gathers would provide a strong signal that we support them and care > > >> about their experience with the software. > > >> > > >> > - Isn't this problem already solved for other languages with > > >> > existing initiatives like local ambassadors and i18n team? Why > > >> > aren't these relevant? > > >> [...] > > >> > > >> It seems like there are at least couple of factors at play here: > > >> first the significant number of users and contributors within > > >> mainland China compared to other regions (analysis suggests there > > >> were nearly as many contributors to the Rocky release from China as > > >> the USA), but second there may be facets of Chinese culture which > > >> make this sort of demonstrative presence a much stronger signal than > > >> it would be in other cultures. > > >> > > >> > - Pardon my ignorance here, what is the problem with email? (I > > >> > understand some chat systems might be blocked, I thought emails > > >> > would be fine, and the lowest common denominator). > > >> > > >> Someone in the TC room (forgive me, I don't recall who now, maybe > > >> Rico?) asserted that Chinese contributors generally only read the > > >> first message in any given thread (perhaps just looking for possible > > >> announcements?) 
and that if they _do_ attempt to read through some > > >> of the longer threads they don't participate in them because the > > >> discussion is presumed to be over and decisions final by the time > > >> they "reach the end" (I guess not realizing that it's perfectly fine > > >> to reply to a month-old discussion and try to help alter course on > > >> things if you have an actual concern?). > > >> > > > > > > While I understand the technical issues that could be due using IRC in China, I still don't get why opening the gates and saying WeChat being yet another official channel would prevent our community from fragmenting. > > > > > > Truly the usage of IRC is certainly questionable, but if we have multiple ways to discuss, I just doubt we could prevent us to silo ourselves between our personal usages. > > > Either we consider the new channels as being only for southbound communication, or we envisage the possibility, as a community, to migrate from IRC to elsewhere (I'm particulary not fan of the latter so I would challenge this but I can understand the reasons) > > > > > > -Sylvain > > > > > > > Objectively, I don't see a way to endorse something other than IRC > > without some form of collective presence on more than just Wechat to > > keep the message intact. IRC is the official messaging platform, for > > whatever that's worth these days. However, at present, it makes less > > and less sense to explicitly eschew other outlets in favor. From a > > Chef OpenStack perspective, the common medium is, perhaps not > > unsurprising, code review. Everything else evolved over time to be > > southbound paths to the code, including most of the conversation > > taking place there as opposed to IRC. > > > > The continuation of this thread only confirms that there is already > > fragmentation in the community, and that people on each side of the > > void genuinely want to close that gap. At this point, the thing to do > > is prevent further fragmentation of the intent. 
It is, however, far > > easier to bikeshed over which platform of choice. > > > > At present, it seems a collective presence is forming ad hoc, > > regardless of any such resolution. With some additional coordination > > and planning, I think that there could be something that could scale > > beyond one or two outlets. > > > > Best, > > Samuel > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Sep 17 23:20:42 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 17 Sep 2018 18:20:42 -0500 Subject: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)? 
In-Reply-To: <75ef2549-dfba-3267-5e76-0c59c64cd4ac@gmail.com> References: <70abbabe-2480-4c25-0665-a14b2eb5f3ab@gmail.com> <75ef2549-dfba-3267-5e76-0c59c64cd4ac@gmail.com> Message-ID: <96d456dd-c5de-b21c-fec7-6d485dcb331d@gmail.com> On 9/17/2018 3:06 PM, Jay Pipes wrote: > My vote would be just change additionalProperties to False in the 599276 > patch and be done with it. Well, it would be on a microversion boundary so the user would be opting into this stricter validation, but that's the point of microversions. So my custom API extension that handles GET /servers?bestpet=cats will continue to work as long as I'm using microversion < 2.66. -- Thanks, Matt From davanum at gmail.com Mon Sep 17 23:22:01 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Mon, 17 Sep 2018 19:22:01 -0400 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> Message-ID: On Mon, Sep 17, 2018 at 7:06 PM Anne Bertucio wrote: > I thought I remember someone (annabelleB?) saying there were known issues > (technical or otherwise) regarding the official channels spun up by the > Foundation. > > > Two separate issues that perhaps got mashed together :) > > Unofficial WeChat channels are limited to ~500 participants and are > invite-only. That makes a few challenges for a community of our size (much > more than 500!). Official subscription channels don’t have these > limitations, but there’s a lengthy process to get one. It’s currently in > progress (unfortunately I don’t think we have an ETA beyond “in progress” > at this point—more than one month; less than six months?). > Weird! Isn't Tencent a platinum sponsor? Can we please ask them for help to move this forward?
(cc'ing Ruan He) -- Dims > > Anne Bertucio > OpenStack Foundation > anne at openstack.org | irc: annabelleB > > > > > > On Sep 17, 2018, at 2:56 PM, Lance Bragstad wrote: > > > > On Mon, Sep 17, 2018 at 1:42 PM Mohammed Naser > wrote: > >> Hi, >> >> On that note, is there any way to get an 'invite' onto those channels? >> >> Any information about the foundation side of things about the >> 'official' channels? >> > > I actually have a question about this as well. During the TC discussion > last Friday there was representation from the Foundation in the room. I > though I remember someone (annabelleB?) saying there were known issues > (technical or otherwise) regarding the official channels spun up by the > Foundation. > > Does anyone know what issues were being referred to here? > > >> >> Thanks, >> Mohammed >> On Mon, Sep 17, 2018 at 3:28 PM Samuel Cassiba wrote: >> > >> > On Mon, Sep 17, 2018 at 6:58 AM Sylvain Bauza >> wrote: >> > > >> > > >> > > >> > > Le lun. 17 sept. 2018 à 15:32, Jeremy Stanley a >> écrit : >> > >> >> > >> On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote: >> > >> [...] >> > >> > - What is the problem joining Wechat will solve (keeping in mind >> the >> > >> > language barrier)? >> > >> >> > >> As I understand it, the suggestion is that mere presence of project >> > >> leadership in venues where this emerging subset of our community >> > >> gathers would provide a strong signal that we support them and care >> > >> about their experience with the software. >> > >> >> > >> > - Isn't this problem already solved for other languages with >> > >> > existing initiatives like local ambassadors and i18n team? Why >> > >> > aren't these relevant? >> > >> [...] 
>> > >> >> > >> It seems like there are at least couple of factors at play here: >> > >> first the significant number of users and contributors within >> > >> mainland China compared to other regions (analysis suggests there >> > >> were nearly as many contributors to the Rocky release from China as >> > >> the USA), but second there may be facets of Chinese culture which >> > >> make this sort of demonstrative presence a much stronger signal than >> > >> it would be in other cultures. >> > >> >> > >> > - Pardon my ignorance here, what is the problem with email? (I >> > >> > understand some chat systems might be blocked, I thought emails >> > >> > would be fine, and the lowest common denominator). >> > >> >> > >> Someone in the TC room (forgive me, I don't recall who now, maybe >> > >> Rico?) asserted that Chinese contributors generally only read the >> > >> first message in any given thread (perhaps just looking for possible >> > >> announcements?) and that if they _do_ attempt to read through some >> > >> of the longer threads they don't participate in them because the >> > >> discussion is presumed to be over and decisions final by the time >> > >> they "reach the end" (I guess not realizing that it's perfectly fine >> > >> to reply to a month-old discussion and try to help alter course on >> > >> things if you have an actual concern?). >> > >> >> > > >> > > While I understand the technical issues that could be due using IRC >> in China, I still don't get why opening the gates and saying WeChat being >> yet another official channel would prevent our community from fragmenting. >> > > >> > > Truly the usage of IRC is certainly questionable, but if we have >> multiple ways to discuss, I just doubt we could prevent us to silo >> ourselves between our personal usages. 
>> > > Either we consider the new channels as being only for southbound >> communication, or we envisage the possibility, as a community, to migrate >> from IRC to elsewhere (I'm particulary not fan of the latter so I would >> challenge this but I can understand the reasons) >> > > >> > > -Sylvain >> > > >> > >> > Objectively, I don't see a way to endorse something other than IRC >> > without some form of collective presence on more than just Wechat to >> > keep the message intact. IRC is the official messaging platform, for >> > whatever that's worth these days. However, at present, it makes less >> > and less sense to explicitly eschew other outlets in favor. From a >> > Chef OpenStack perspective, the common medium is, perhaps not >> > unsurprising, code review. Everything else evolved over time to be >> > southbound paths to the code, including most of the conversation >> > taking place there as opposed to IRC. >> > >> > The continuation of this thread only confirms that there is already >> > fragmentation in the community, and that people on each side of the >> > void genuinely want to close that gap. At this point, the thing to do >> > is prevent further fragmentation of the intent. It is, however, far >> > easier to bikeshed over which platform of choice. >> > >> > At present, it seems a collective presence is forming ad hoc, >> > regardless of any such resolution. With some additional coordination >> > and planning, I think that there could be something that could scale >> > beyond one or two outlets. >> > >> > Best, >> > Samuel >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> Mohammed Naser — vexxhost >> ----------------------------------------------------- >> D. 
514-316-8872 >> D. 800-910-1726 ext. 200 >> E. mnaser at vexxhost.com >> W. http://vexxhost.com >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Mon Sep 17 23:27:26 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 18 Sep 2018 07:27:26 +0800 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> Message-ID: Thanks Anne :) On another side note, there has not been a (maybe I missed) good "no" vote message I was looking for. A good quality "no" message could something like this in my opinion (and this is just one of many possibilities): "Thanks for invitation but no I would not like to use social media apps other than IRC. General social media apps are open to a more variety of people than in IRC channels and there are personalities or social characters that I found myself just cannot cope with. 
Moreover social media apps demands instant response, or at least has the expectation for one, in its nature and it would make me feel pressured all the time. So no thank you I would not use social media in any form, however if you do have good technical questions from the social apps, please feel free to relay to me and I'm glad to help when I have the bandwidth" On Tue, Sep 18, 2018 at 7:06 AM Anne Bertucio wrote: > I though I remember someone (annabelleB?) saying there were known issues > (technical or otherwise) regarding the official channels spun up by the > Foundation. > > > Two separate issues that perhaps got mashed together :) > > Unofficial WeChat channels are limited to ~500 participants and are > invite-only. That makes a few challenges for a community of our size (much > more than 500!). Official subscription channels don’t have these > limitations, but there’s a lengthy process to get one. It’s currently in > progress (unfortunately I don’t think we have an ETA beyond “in progress” > at this point—more than one month; less than six months?). > > > Anne Bertucio > OpenStack Foundation > anne at openstack.org | irc: annabelleB > > > > > > On Sep 17, 2018, at 2:56 PM, Lance Bragstad wrote: > > > > On Mon, Sep 17, 2018 at 1:42 PM Mohammed Naser > wrote: > >> Hi, >> >> On that note, is there any way to get an 'invite' onto those channels? >> >> Any information about the foundation side of things about the >> 'official' channels? >> > > I actually have a question about this as well. During the TC discussion > last Friday there was representation from the Foundation in the room. I > though I remember someone (annabelleB?) saying there were known issues > (technical or otherwise) regarding the official channels spun up by the > Foundation. > > Does anyone know what issues were being referred to here? 
> > >> >> Thanks, >> Mohammed >> On Mon, Sep 17, 2018 at 3:28 PM Samuel Cassiba wrote: >> > >> > On Mon, Sep 17, 2018 at 6:58 AM Sylvain Bauza >> wrote: >> > > >> > > >> > > >> > > Le lun. 17 sept. 2018 à 15:32, Jeremy Stanley a >> écrit : >> > >> >> > >> On 2018-09-16 14:14:41 +0200 (+0200), Jean-philippe Evrard wrote: >> > >> [...] >> > >> > - What is the problem joining Wechat will solve (keeping in mind >> the >> > >> > language barrier)? >> > >> >> > >> As I understand it, the suggestion is that mere presence of project >> > >> leadership in venues where this emerging subset of our community >> > >> gathers would provide a strong signal that we support them and care >> > >> about their experience with the software. >> > >> >> > >> > - Isn't this problem already solved for other languages with >> > >> > existing initiatives like local ambassadors and i18n team? Why >> > >> > aren't these relevant? >> > >> [...] >> > >> >> > >> It seems like there are at least couple of factors at play here: >> > >> first the significant number of users and contributors within >> > >> mainland China compared to other regions (analysis suggests there >> > >> were nearly as many contributors to the Rocky release from China as >> > >> the USA), but second there may be facets of Chinese culture which >> > >> make this sort of demonstrative presence a much stronger signal than >> > >> it would be in other cultures. >> > >> >> > >> > - Pardon my ignorance here, what is the problem with email? (I >> > >> > understand some chat systems might be blocked, I thought emails >> > >> > would be fine, and the lowest common denominator). >> > >> >> > >> Someone in the TC room (forgive me, I don't recall who now, maybe >> > >> Rico?) asserted that Chinese contributors generally only read the >> > >> first message in any given thread (perhaps just looking for possible >> > >> announcements?) 
and that if they _do_ attempt to read through some >> > >> of the longer threads they don't participate in them because the >> > >> discussion is presumed to be over and decisions final by the time >> > >> they "reach the end" (I guess not realizing that it's perfectly fine >> > >> to reply to a month-old discussion and try to help alter course on >> > >> things if you have an actual concern?). >> > >> >> > > >> > > While I understand the technical issues that could be due using IRC >> in China, I still don't get why opening the gates and saying WeChat being >> yet another official channel would prevent our community from fragmenting. >> > > >> > > Truly the usage of IRC is certainly questionable, but if we have >> multiple ways to discuss, I just doubt we could prevent us to silo >> ourselves between our personal usages. >> > > Either we consider the new channels as being only for southbound >> communication, or we envisage the possibility, as a community, to migrate >> from IRC to elsewhere (I'm particulary not fan of the latter so I would >> challenge this but I can understand the reasons) >> > > >> > > -Sylvain >> > > >> > >> > Objectively, I don't see a way to endorse something other than IRC >> > without some form of collective presence on more than just Wechat to >> > keep the message intact. IRC is the official messaging platform, for >> > whatever that's worth these days. However, at present, it makes less >> > and less sense to explicitly eschew other outlets in favor. From a >> > Chef OpenStack perspective, the common medium is, perhaps not >> > unsurprising, code review. Everything else evolved over time to be >> > southbound paths to the code, including most of the conversation >> > taking place there as opposed to IRC. >> > >> > The continuation of this thread only confirms that there is already >> > fragmentation in the community, and that people on each side of the >> > void genuinely want to close that gap. 
At this point, the thing to do >> > is prevent further fragmentation of the intent. It is, however, far >> > easier to bikeshed over which platform of choice. >> > >> > At present, it seems a collective presence is forming ad hoc, >> > regardless of any such resolution. With some additional coordination >> > and planning, I think that there could be something that could scale >> > beyond one or two outlets. >> > >> > Best, >> > Samuel >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> Mohammed Naser — vexxhost >> ----------------------------------------------------- >> D. 514-316-8872 >> D. 800-910-1726 ext. 200 >> E. mnaser at vexxhost.com >> W. http://vexxhost.com >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. 
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From swamireddy at gmail.com Mon Sep 17 23:48:39 2018 From: swamireddy at gmail.com (M Ranga Swami Reddy) Date: Tue, 18 Sep 2018 05:18:39 +0530 Subject: [openstack-dev] GUI for Swift object storage Message-ID: Hi - is there any GUI (open source) available for Swift objects storage? Thanks Swa -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Sep 18 00:06:35 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 18 Sep 2018 00:06:35 +0000 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> Message-ID: <20180918000635.li7futcz5tyc43k2@yuggoth.org> On 2018-09-18 07:27:26 +0800 (+0800), Zhipeng Huang wrote: [...] > On another side note, there has not been a (maybe I missed) good > "no" vote message I was looking for. > > A good quality "no" message could something like this in my > opinion (and this is just one of many possibilities): > > "Thanks for invitation but no I would not like to use social media > apps other than IRC. General social media apps are open to a more > variety of people than in IRC channels and there are personalities > or social characters that I found myself just cannot cope with. > Moreover social media apps demands instant response, or at least > has the expectation for one, in its nature and it would make me > feel pressured all the time. 
So no thank you I would not use > social media in any form, however if you do have good technical > questions from the social apps, please feel free to relay to me > and I'm glad to help when I have the bandwidth" This seems to completely miss the reasons I personally reject such platforms. I don't use proprietary tools or services for interacting with our community, I avoid relying on products from companies which attempt to track or share my location and activities for their own profit or to report them to various government agencies, and I have no interest in owning a "smart phone" (which some services such as wechat outright require). At least for me, the example rejection you proposed above is a miss on all counts... I am plenty capable of coping with any of the "personalities or social characters" I'm likely to encounter on those services (I'm quite certain IRC is where you're going to find some of the worst and *most* intolerable characters of any discussion medium). I also am accustomed to providing a near instant response to urgent requests in IRC, and at the same time don't feel particularly "pressured" to do so. And I _do_ use some social media: both IRC and mailing lists are in fact legitimate examples of social media. I'm really not even opposed to using other forms as long as they rely on open protocols and both the client _and_ server software are available under a free/libre open source license. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From soulxu at gmail.com Tue Sep 18 00:33:30 2018 From: soulxu at gmail.com (Alex Xu) Date: Tue, 18 Sep 2018 08:33:30 +0800 Subject: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)? 
In-Reply-To: <75ef2549-dfba-3267-5e76-0c59c64cd4ac@gmail.com> References: <70abbabe-2480-4c25-0665-a14b2eb5f3ab@gmail.com> <75ef2549-dfba-3267-5e76-0c59c64cd4ac@gmail.com> Message-ID: That only means that after 599276 we have only the servers API and os-instance-actions API rejecting undefined query parameters. What I'm thinking about is checking all the APIs and adding JSON query-parameter validation with additionalProperties=True to any API that doesn't have it yet, then using another microversion to set additionalProperties to False so the whole Nova API becomes consistent. On Tue, Sep 18, 2018 at 4:07 AM, Jay Pipes wrote: > On 09/17/2018 03:28 PM, Matt Riedemann wrote: > > This is a question from a change [1] which adds a new changes-before > > filter to the servers, os-instance-actions and os-migrations APIs. > > > > For context, the os-instance-actions API stopped accepting undefined > > query parameters in 2.58 when we added paging support. > > > > The os-migrations API stopped allowing undefined query parameters in > > 2.59 when we added paging support. > > > > The open question on the review is if we should change GET /servers and > > GET /servers/detail to stop allowing undefined query parameters starting > > with microversion 2.66 [2]. Apparently when we added support for 2.5 and > > 2.26 for listing servers we didn't think about this. It means that a > > user can specify a query parameter, documented in the API reference, but > > with an older microversion and it will be silently ignored. That is > > backward compatible but confusing from an end user perspective since it > > would appear to them that the filter is not being applied, when in fact > > it would be if they used the correct microversion. > > > > So do we want to start enforcing query parameters when listing servers > > to our defined list with microversion 2.66 or just continue to silently > > ignore them if used incorrectly?
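[As an aside, the lenient-then-strict transition being weighed here can be sketched in a few lines of plain Python. This is only an illustration: the parameter names and the hand-rolled version check below are made up, and Nova's real implementation expresses this with JSON Schema and additionalProperties rather than explicit set arithmetic.]

```python
# Sketch of microversion-gated query-parameter validation. Parameter
# names and the (2, 66) boundary are illustrative, not Nova's code.

KNOWN_PARAMS = {"changes-since", "changes-before", "sort_key", "sort_dir"}

def validate_query(params, microversion):
    """Return the accepted parameters, enforcing strictness at >= 2.66."""
    unknown = set(params) - KNOWN_PARAMS
    if microversion >= (2, 66) and unknown:
        # Strict mode: the additionalProperties=False behavior.
        raise ValueError("Additional properties are not allowed: %s"
                         % ", ".join(sorted(unknown)))
    # Lenient mode: the pre-2.66 additionalProperties=True behavior,
    # where undefined parameters are silently ignored.
    return {k: v for k, v in params.items() if k in KNOWN_PARAMS}

query = {"changes-before": "2018-09-17T00:00:00Z", "bestpet": "cats"}

# Old microversion: "bestpet" is dropped without complaint.
print(validate_query(query, (2, 65)))

# New microversion: the same request is rejected outright.
try:
    validate_query(query, (2, 66))
except ValueError as exc:
    print(exc)
```

[With the lenient branch, a request like GET /servers?bestpet=cats succeeds but the filter is silently dropped; with the strict branch the same request fails loudly, which is exactly the user-visible difference the thread is debating.]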
> > > > Note that starting in Rocky, the Neutron API will start rejecting > > unknown query parameters [3] if the filter-validation extension is > > enabled (since Neutron doesn't use microversions). So there is some > > precedent in OpenStack for starting to enforce query parameters. > > > > [1] https://review.openstack.org/#/c/599276/ > > [2] > > > https://review.openstack.org/#/c/599276/23/nova/api/openstack/compute/schemas/servers.py > > > > [3] > > https://docs.openstack.org/releasenotes/neutron/rocky.html#upgrade-notes > > My vote would be just change additionalProperties to False in the 599276 > patch and be done with it. > > Add a release note about the change, of course. > > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at not.mn Tue Sep 18 01:11:22 2018 From: me at not.mn (John Dickinson) Date: Mon, 17 Sep 2018 18:11:22 -0700 Subject: [openstack-dev] GUI for Swift object storage In-Reply-To: References: Message-ID: That's a great question. A quick Google search shows a few like Swift Explorer, Cyberduck, and Gladinet. But since Swift supports the S3 API (check with your cluster operator to see if this is enabled, or examine the results of a `GET /info` request), you can use any available S3 GUI client as well (as long as you can configure the endpoints you connect to). --John On 17 Sep 2018, at 16:48, M Ranga Swami Reddy wrote: > Hi - is there any GUI (open source) available for Swift objects > storage?
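[John's `GET /info` suggestion is easy to script with nothing but the standard library. A minimal sketch; the endpoint in the docstring and the sample capability document below are invented for illustration, so check them against your own cluster.]

```python
# Check a Swift cluster's capability document for S3-compatibility
# middleware before pointing an S3 GUI client at it. The endpoint and
# the sample response here are illustrative, not from a real cluster.
import json
import urllib.request

def fetch_info(base_url):
    """GET /info from a cluster, e.g. fetch_info("http://saio:8080")."""
    with urllib.request.urlopen(base_url.rstrip("/") + "/info") as resp:
        return json.load(resp)

def s3_enabled(info):
    """True if an S3-compatibility middleware shows up in /info."""
    # The middleware has gone by both "swift3" and "s3api" names.
    return any(name in info for name in ("s3api", "swift3"))

# A trimmed example of what /info might return:
sample = {"swift": {"version": "2.19.0"}, "s3api": {}}
print(s3_enabled(sample))  # True
```

[If `s3_enabled` comes back False, the cluster only speaks the native Swift API and you would be limited to the Swift-aware clients named above.]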
> > Thanks > Swa > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Tue Sep 18 01:58:36 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 18 Sep 2018 09:58:36 +0800 Subject: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: Hope you all safely travel back to home now. Here is the summarize from some discussions (as much as I can trigger or attend) in PTG for SIGs/WGs expose and some idea for action, http://lists.openstack.org/pipermail/openstack-dev/2018-September/134689.html I also like the idea to at least expose the information of SIGs/WGs right away. Feel free to give your feedback. And not like the following message matters to anyone, but just in case. I believe this is a goal for all group in the community so just don't let who your duty, position, or full hand of good tasks to limit what you think about the relative of this goal with you. Give your positive or negative opinions to help us get a better shape. On Wed, Sep 12, 2018 at 11:47 PM Matt Riedemann wrote: > Rather than take a tangent on Kristi's candidacy thread [1], I'll bring > this up separately. > > Kristi said: > > "Ultimately, this list isn’t exclusive and I’d love to hear your and > other people's opinions about what you think the I should focus on." > > Well since you asked... 
> > Some feedback I gave to the public cloud work group yesterday was to get > their RFE/bug list ranked from the operator community (because some of > the requests are not exclusive to public cloud), and then put pressure > on the TC to help project manage the delivery of the top issue. I would > like all of the SIGs to do this. The upgrades SIG should rank and > socialize their #1 issue that needs attention from the developer > community - maybe that's better upgrade CI testing for deployment > projects, maybe it's getting the pre-upgrade checks goal done for Stein. > The UC should also be doing this; maybe that's the UC saying, "we need > help on closing feature gaps in openstack client and/or the SDK". I > don't want SIGs to bombard the developers with *all* of their > requirements, but I want to get past *talking* about the *same* issues > *every* time we get together. I want each group to say, "this is our top > issue and we want developers to focus on it." For example, the extended > maintenance resolution [2] was purely birthed from frustration about > talking about LTS and stable branch EOL every time we get together. It's > also the responsibility of the operator and user communities to weigh in > on proposed release goals, but the TC should be actively trying to get > feedback from those communities about proposed goals, because I bet > operators and users don't care about mox removal [3]. > > I want to see the TC be more of a cross-project project management > group, like a group of Ildikos and what she did between nova and cinder > to get volume multi-attach done, which took persistent supervision to > herd the cats and get it delivered. Lance is already trying to do this > with unified limits. Doug is doing this with the python3 goal. I want my > elected TC members to be pushing tangible technical deliverables forward. > > I don't find any value in the TC debating ad nauseam about visions and > constellations and "what is openstack?". 
Scope will change over time > depending on who is contributing to openstack, we should just accept > this. And we need to realize that if we are failing to deliver value to > operators and users, they aren't going to use openstack and then "what > is openstack?" won't matter because no one will care. > > So I encourage all elected TC members to work directly with the various > SIGs to figure out their top issue and then work on managing those > deliverables across the community because the TC is particularly well > suited to do so given the elected position. I realize political and > bureaucratic "how should openstack deal with x?" things will come up, > but those should not be the priority of the TC. So instead of > philosophizing about things like, "should all compute agents be in a > single service with a REST API" for hours and hours, every few months - > immediately ask, "would doing that get us any closer to achieving top > technical priority x?" Because if not, or it's so fuzzy in scope that no > one sees the way forward, document a decision and then drop it. > > [1] > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html > [2] > > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > [3] https://governance.openstack.org/tc/goals/rocky/mox_removal.html > > -- > > Thanks, > > Matt > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengh512 at gmail.com Tue Sep 18 02:23:33 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 18 Sep 2018 10:23:33 +0800 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: <20180918000635.li7futcz5tyc43k2@yuggoth.org> References: <20180914204756.o5umojwxvypskwti@yuggoth.org> <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> <20180918000635.li7futcz5tyc43k2@yuggoth.org> Message-ID: On Tue, Sep 18, 2018 at 8:06 AM Jeremy Stanley wrote: > This seems to completely miss the reasons I personally reject such > platforms. I don't use proprietary tools or services for interacting > with our community, I avoid relying on products from companies which > attempt to track or share my location and activities for their own > profit or to report them to various government agencies, and I have > no interest in owning a "smart phone" (which some services such as > wechat outright require). At least for me, the example rejection you > proposed above is a miss on all counts... > > I am plenty capable of coping with any of the "personalities or > social characters" I'm likely to encounter on those services (I'm > quite certain IRC is where you're going to find some of the worst > and *most* intolerable characters of any discussion medium). I also > am accustomed to providing a near instant response to urgent > requests in IRC, and at the same time don't feel particularly > "pressured" to do so. And I _do_ use some social media: both IRC and > mailing lists are in fact legitimate examples of social media. I'm > really not even opposed to using other forms as long as they rely on > open protocols and both the client _and_ server software are > available under a free/libre open source license. 
> -- > Jeremy Stanley > Jeremy, what I'm saying here, and have also addressed in comments on the related resolution patch, is that personality reasons are the ones we have to respect; no form of governance change could help solve that problem. For everything else, though, we can always find a way to address the issue with remedies; if we don't have a good answer now, maybe we will have one later. Preference on social tooling is something the technical committee is able to address, by isolating the use of proprietary tools to certain scenarios and strictly enforcing the open source communication solutions we have today as the central ones the community will continue to use. This is not an unsolvable problem given that we have a technical committee, but personality issues are, no matter what governance instrument we have. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Sep 18 02:26:57 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 18 Sep 2018 11:26:57 +0900 Subject: [openstack-dev] [Openstack-operators] [tc]Global Reachout Proposal In-Reply-To: References: Message-ID: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> ---- On Sat, 15 Sep 2018 02:49:40 +0900 Zhipeng Huang wrote ---- > Hi all, > Following up on the diversity discussion we had in the tc session this morning [0], I've proposed a resolution on facilitating the technical community at large to engage in global reachout for OpenStack more efficiently. > Your feedback is welcomed. 
Whether this should be a new resolution or not, at the end of the day this is a conversation worth having. > [0] https://review.openstack.org/602697 I like that we are discussing the Global Reachout topic, which I personally feel is very important. There are many obstacles to having a standard global communication method. Honestly speaking, there cannot be a single standard communication channel which accommodates every language, culture, and company/government restriction; the best we can do is pick the best available option. I can understand that IRC cannot be used in China, which is very painful, and that WeChat is mostly used instead. But there are a few key points we need to consider for any social app we might adopt: - Technical discussions, which need more people to participate and need references and links, cannot be done in a mobile-only app; you need a desktop version of the app. - Many social apps have restrictions on the number of participants, invitations, and logging. - The app should not be blocked in other places. - It should not split the community members among more than one app or existing channel. With all those points in mind, we need to think about which communication channels we really want to promote as a community. IMO, we should educate and motivate people to participate over existing channels like IRC and the mailing list as much as possible. At least the mailing list does not have any usage issues. Ambassadors and local user group members can play a critical role here, as can local developers (I saw Alex volunteer for the nova discussion in China): they can ask people to start the communication on the mailing list, or, if they cannot, start the thread and act as a proxy for them. I know Slack is being used by the Japan community, and most of the communication there is in Japanese, so I cannot help there even if I join. Talking to Akira (Japan Ambassador), he says most developers do communicate on IRC and the mailing list, but users hesitate to do so because of culture and language. 
So if the proposal is for community members (developers, TC, UC, Ambassadors, User Group members, etc.) to participate in local chat apps and encourage people to move to the mailing list etc., then it is a great idea. But if we want to promote all the different chat apps as community practice, then it can lead to a lot of other problems rather than solving the current one. For example, it will divide the technical discussion. -gmann > -- > Zhipeng (Howard) Huang > Standard Engineer, IT Standard & Patent/IT Product Line, Huawei Technologies Co., Ltd. Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen > (Previous) > Research Assistant, Mobile Ad-Hoc Network Lab, Calit2, University of California, Irvine. Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From swamireddy at gmail.com Tue Sep 18 02:31:16 2018 From: swamireddy at gmail.com (M Ranga Swami Reddy) Date: Tue, 18 Sep 2018 08:01:16 +0530 Subject: [openstack-dev] GUI for Swift object storage In-Reply-To: References: Message-ID: The GUI tools are all non-open-source... you need to pay for tools like Cyberduck etc. I'm looking for an open source GUI for Swift API access. On Tue, 18 Sep 2018, 06:41 John Dickinson, wrote: > That's a great question. > > A quick google search shows a few like Swift Explorer, Cyberduck, and > Gladinet. But since Swift supports the S3 API (check with your cluster > operator to see if this is enabled, or examine the results of a GET /info > request), you can use any available S3 GUI client as well (as long as you > can configure the endpoints you connect to). > > --John > > On 17 Sep 2018, at 16:48, M Ranga Swami Reddy wrote: > > Hi - is there any GUI (open source) available for Swift objects storage? 
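[Editor's note] John's suggestion above, checking the results of a GET /info request to see whether the S3 API is enabled, can be sketched as follows. This is a minimal illustration, not part of the original thread: the helper name and sample response are invented, and it assumes the common behaviour that enabled middlewares appear as top-level keys in the /info JSON ('s3api' on newer Swift releases, 'swift3' on older ones).

```python
import json

def s3_api_enabled(info_doc: str) -> bool:
    """Check a Swift GET /info response for the S3 compatibility layer.

    Enabled middlewares typically appear as top-level keys in the JSON
    document; the S3 layer is usually exposed as 's3api' (newer
    releases) or 'swift3' (older ones).
    """
    info = json.loads(info_doc)
    return any(key in info for key in ("s3api", "swift3"))

# Trimmed, illustrative example of what a cluster might return:
sample = json.dumps({
    "swift": {"version": "2.19.0", "max_file_size": 5368709120},
    "s3api": {"max_bucket_listing": 1000},
})

print(s3_api_enabled(sample))  # → True
```

Against a live cluster one would first fetch the document itself, e.g. with `curl https://PROXY/info` (assuming the operator exposes the unauthenticated endpoint), and feed the response body to the check.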
> > Thanks > Swa > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Sep 18 02:41:14 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 18 Sep 2018 11:41:14 +0900 Subject: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)? In-Reply-To: References: <70abbabe-2480-4c25-0665-a14b2eb5f3ab@gmail.com> <75ef2549-dfba-3267-5e76-0c59c64cd4ac@gmail.com> Message-ID: <165ea8d9f10.add97103175992.5456929857422374986@ghanshyammann.com> ---- On Tue, 18 Sep 2018 09:33:30 +0900 Alex Xu wrote ---- > That only means after 599276 we only have servers API and os-instance-action API stopped accepting the undefined query parameter. > What I'm thinking about is checking all the APIs, add json-query-param checking with additionalProperties=True if the API don't have yet. And using another microversion set additionalProperties to False, then the whole Nova API become consistent. I too vote for doing it for all the other APIs together. Restricting unknown query or request parameters is very useful for API consistency. This is item #1 in this etherpad: https://etherpad.openstack.org/p/nova-api-cleanup If you would like, I can propose a quick spec for that; if there is a positive response to doing it all together, we skip doing it in 599276, otherwise we do it for GET servers in 599276. 
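[Editor's note] For context on the mechanism being discussed: nova validates query strings against JSON-Schema definitions, and the `additionalProperties` keyword controls whether parameters outside the defined set are tolerated. The sketch below mimics that effect in plain Python; the parameter set and helper are illustrative only, not nova's actual schema code (which lives under nova/api/openstack/compute/schemas/ and uses the jsonschema library).

```python
# Illustrative subset of server-list filters (not nova's full list).
KNOWN_PARAMS = {"changes-since", "changes-before", "limit", "marker"}

def validate_query(params, additional_properties):
    """Mimic additionalProperties for a query-parameter schema.

    additional_properties=True : unknown parameters are silently
    ignored (the pre-2.66 behaviour being discussed).
    additional_properties=False: unknown parameters are rejected.
    """
    unknown = sorted(set(params) - KNOWN_PARAMS)
    if unknown and not additional_properties:
        raise ValueError("Invalid query parameters: %s" % ", ".join(unknown))
    return {k: v for k, v in params.items() if k in KNOWN_PARAMS}

request = {"changes-before": "2018-09-18T00:00:00Z", "chnages-since": "typo"}

# Old behaviour: the misspelled filter is silently dropped.
print(validate_query(request, additional_properties=True))

# New behaviour: the same request is rejected outright.
try:
    validate_query(request, additional_properties=False)
except ValueError as exc:
    print(exc)
```

The change under discussion is effectively flipping that flag from True to False at microversion 2.66, so the same request goes from being silently filtered to being rejected (nova returns a 400 in that case).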
-gmann > Jay Pipes wrote on Tue, Sep 18, 2018 at 4:07 AM: > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > On 09/17/2018 03:28 PM, Matt Riedemann wrote: > > This is a question from a change [1] which adds a new changes-before > > filter to the servers, os-instance-actions and os-migrations APIs. > > > > For context, the os-instance-actions API stopped accepting undefined > > query parameters in 2.58 when we added paging support. > > > > The os-migrations API stopped allowing undefined query parameters in > > 2.59 when we added paging support. > > > > The open question on the review is if we should change GET /servers and > > GET /servers/detail to stop allowing undefined query parameters starting > > with microversion 2.66 [2]. Apparently when we added support for 2.5 and > > 2.26 for listing servers we didn't think about this. It means that a > > user can specify a query parameter, documented in the API reference, but > > with an older microversion and it will be silently ignored. That is > > backward compatible but confusing from an end user perspective since it > > would appear to them that the filter is not being applied, when in fact > > it would be if they used the correct microversion. > > > > So do we want to start enforcing query parameters when listing servers > > to our defined list with microversion 2.66 or just continue to silently > > ignore them if used incorrectly? > > > > Note that starting in Rocky, the Neutron API will start rejecting > > unknown query parameters [3] if the filter-validation extension is > > enabled (since Neutron doesn't use microversions). So there is some > > precedent in OpenStack for starting to enforce query parameters. 
> > > > [1] https://review.openstack.org/#/c/599276/ > > [2] > > https://review.openstack.org/#/c/599276/23/nova/api/openstack/compute/schemas/servers.py > > > > [3] > > https://docs.openstack.org/releasenotes/neutron/rocky.html#upgrade-notes > > My vote would be just change additionalProperties to False in the 599276 > patch and be done with it. > > Add a release note about the change, of course. > > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From rico.lin.guanyu at gmail.com Tue Sep 18 03:27:29 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 18 Sep 2018 11:27:29 +0800 Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration Message-ID: *TL;DR* *How about a forum in Berlin for discussing autoscaling integration (as a long-term goal) in OpenStack?* Hi all, we are starting to discuss how we can join development between Heat and Senlin, as we originally planned when we decided to fork Senlin from Heat a long time ago. IMO the biggest issues we have now are that users are using autoscaling in both services, there appears to be a lot of duplicated effort, and some great enhancements exist in one service but not the other. As a long-term goal (from the beginning), we should try to join development to sync functionality and move users to Senlin for autoscaling. So we should start to review this goal, or at least try to discuss how we can help users without breaking or forcing anything. It would be great if we could build a common library across projects, use that common library in both projects, make sure all improvements are implemented in that library, and finally have the Heat autoscaling group call Senlin through that library. 
And in the long term, we are going to have all users use one general way instead of multiple ways that generate huge confusion. *As an action, I propose we have a forum in Berlin to sync up all the effort from both teams and plan the ideal scenario design. The forum submission [1] ends on 9/26.* It would also benefit both teams to start thinking about how to modularize those functionalities for easier integration in the future. From some Heat PTG sessions, we keep bringing up ideas on how we can improve the current solutions for autoscaling. We should start to talk about whether it makes sense to combine all group resources into one and inherit from it for other resources (ideally deprecating the remaining resource types). For example, we can do batch create/delete in Resource Group, but not in ASG. We definitely have some unsynchronized work within Heat, and across Heat and Senlin. Please let me know who is interested in this idea, so we can work together and reach our goal step by step. Also, please share your thoughts if you have any concerns about this proposal. [1] https://www.openstack.org/summit/berlin-2018/call-for-presentations -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From geng.changcai2 at zte.com.cn Tue Sep 18 05:34:18 2018 From: geng.changcai2 at zte.com.cn (geng.changcai2 at zte.com.cn) Date: Tue, 18 Sep 2018 13:34:18 +0800 (CST) Subject: [openstack-dev] [Freezer] Freezer support relational db and osbrick Message-ID: <201809181334187587929@zte.com.cn> Hi, Saad Zaher: 1) Implementing the SQLAlchemy driver for freezer-api: At present, part 1 of the SQLAlchemy driver for freezer-api is in https://review.openstack.org/#/c/539077/. Is there a part 2? If you have any uncommitted patches, can you share them? I'd like to make the driver more complete so that it can be used. If not, I plan to submit a BP, which has already been completed. 
2) os-brick: There are many problems; for example, multiple partitions and LVM partitions are not supported, etc. If there is no development underway to complete this, I would like to propose a BP to complete it. Best regards, gengchc2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Tue Sep 18 06:36:05 2018 From: james.page at canonical.com (James Page) Date: Tue, 18 Sep 2018 00:36:05 -0600 Subject: [openstack-dev] [charms] retiring the ceph charm Message-ID: Hi All We deprecated and stopped releasing the ceph charm a few cycles back in preference to the split ceph-osd/ceph-mon charms; consider this official notification of retirement! Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Tue Sep 18 06:40:36 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Tue, 18 Sep 2018 08:40:36 +0200 Subject: [openstack-dev] [kolla] Stein forum topics proposal Message-ID: Hi, Berlin forum brainstorming has started, please add your proposal topics before 26th September: https://etherpad.openstack.org/p/kolla-forum-stein Forum topics are proposed using the same method as Summit presentations: https://www.openstack.org/summit/berlin-2018/call-for-presentations Regards -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Tue Sep 18 09:58:48 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 18 Sep 2018 11:58:48 +0200 Subject: [openstack-dev] [tc] notes from stein ptg meetings of the technical committee In-Reply-To: <1537223050-sup-9776@lrrr.local> References: <1537204771-sup-1590@lrrr.local> <5030a1e4-23cd-3cc9-1f89-e895efc7eb5b@gmail.com> <59f39dda-5113-7e8d-7402-8b1711d25f66@redhat.com> <1537216461-sup-8994@lrrr.local> <019dad0e-8631-f6fe-e786-577f79f8edc6@gmail.com> <1537223050-sup-9776@lrrr.local> Message-ID: <8142b212-0b40-2d47-b1f9-1ac49a79ffb1@openstack.org> Doug Hellmann wrote: > Excerpts from Jay Pipes's message of 2018-09-17 17:07:43 -0400: >> On 09/17/2018 04:50 PM, Doug Hellmann wrote: >>> [...] >>> I don't remember the history quite the way Jay does, either. I >>> remember us trying to base the decision more about what the team >>> was doing than how the code looked or whether the implementation >>> met anyone's idea of "good". That's why we retained the requirement >>> that the project "aligns with the OpenStack Mission". >> >> Hmm. I very specifically remember the incubation and graduation review >> of Zaqar and the fact that over a couple cycles of TC elections, the >> "advice" given by the TC about specific technical implementation details >> changed, often arbitrarily, depending on who was on the TC and what day >> of the week it was. In fact, I pretty vividly remember this arbitrary >> nature of the architectural review being one of the primary reasons we >> switched to a purely objective set of criteria. > > I remember talking about objectivity, but I also remember that we > stopped reviewing aspects of a project like it's architecture or > implementation details to avoid having the case you describe recur. > I remember that because I had a hard time coming around to that > point of view, at first. 
> > You're correct, however, that the resolution we adopted as the first > step toward the big tent change > (https://governance.openstack.org/tc/resolutions/20141202-project-structure-reform-spec.html#recognize-all-our-community-is-a-part-of-openstack) > does talk about making decisions based on team practices and projects > fitting the mission as being objective requirements. And the patch > that implemented the first part of the big tent change > (https://review.openstack.org/#/c/145740/14) also talks about > objectivity. > > It's interesting that we took different things away from the same > discussion. :-) > > In any case, I think we've learned there is still quite a bit of > subjectivity in the question about whether a project fits the > mission. Right. Back then our goal was definitely to remove the most subjective requirements. We removed judgment on whether the project was a good idea, or whether the technical architecture was sound, or whether the project was "mature" enough. We only kept two criteria: alignment with the OpenStack culture, and alignment with the OpenStack mission. Those are not purely objective criteria though. We had cases where we had to do a leap of faith whether the project really aligns with the OpenStack culture. And we had projects that were in a grey area even with our very vague mission statement. The Adjutant discussion, in the end, was about whether it would significantly hurt interoperability, and therefore be detrimental to the OpenStack mission rather than helping it. -- Thierry Carrez (ttx) From mark at stackhpc.com Tue Sep 18 10:41:53 2018 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 18 Sep 2018 11:41:53 +0100 Subject: [openstack-dev] [kayobe] Stein forum topics Message-ID: Hi, Brainstorming for the Stein forum in Berlin has started, please add your proposal topics before 26th September to https://etherpad.openstack.org/p/kayobe-stein-forum. 
Forum topics are proposed using the same CFP method as Summit presentations: https://www.openstack.org/summit/berlin-2018/call-for-presentations Cheers, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.rydberg at citynetwork.eu Tue Sep 18 12:05:11 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Tue, 18 Sep 2018 14:05:11 +0200 Subject: [openstack-dev] [publiccloud-wg] Meeting tomorrow Message-ID: <70976fdd-3d0f-dafa-a792-4cb4daf96af1@citynetwork.eu> Hi everyone, Don't forget that we have a meeting tomorrow at 0700 UTC at IRC channel #openstack-publiccloud. See you all there! Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From dtantsur at redhat.com Tue Sep 18 12:20:54 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 18 Sep 2018 14:20:54 +0200 Subject: [openstack-dev] Bumping eventlet to 0.24.1 In-Reply-To: <20180906065248.m73g3nhsv4v3imkv@gentoo.org> References: <20180823145013.vzt46kgd7d7lkmkj@gentoo.org> <20180906065248.m73g3nhsv4v3imkv@gentoo.org> Message-ID: On 9/6/18 8:52 AM, Matthew Thode wrote: > On 18-08-23 09:50:13, Matthew Thode wrote: >> This is your warning, if you have concerns please comment in >> https://review.openstack.org/589382 . cross tests pass, so that's a >> good sign... atm this is only for stein. >> > > I pushed the big red button. Ironic grenade might have been broken by this change: https://bugs.launchpad.net/neutron/+bug/1792925. No clear evidence, but that seems to be the only suspect. 
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Tue Sep 18 12:32:49 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 18 Sep 2018 12:32:49 +0000 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> <20180918000635.li7futcz5tyc43k2@yuggoth.org> Message-ID: <20180918123249.2hxgyqjevcgpbav4@yuggoth.org> On 2018-09-18 10:23:33 +0800 (+0800), Zhipeng Huang wrote: [...] > Jeremy, what I'm saying here, and also addressed in comments with > the related resolution patch, is that personality reasons are the > ones that we have to respect and no form of governance change > could help solve the problem. However other than that, we could > always find a way to address the issue for remedies, if we don't > have a good answer now maybe we will have sometime later. > > Preference on social tooling is something that the technical > committee is able to address, with isolation of usage of > proprietary tools for certain scenario and also strict policy on > enforcing the open source communication solutions we have today as > the central ones the community will continue to use. This is not > an unsolvable problem given that we have a technical committee, > but personality issues are, no matter what governance instrument > we have. Once again, I think we're talking past each other. I was replying to (and quoted from) the provided sample rejection letter. First I wanted to point out that I had already rejected the premise earlier on this thread even though it was suggested that no rejection had yet been provided. 
Second, the sample letter seemed to indicate what I believe to be a fundamental misunderstanding among those pushing this issue: the repeated attempts I've seen so far to paint a disinterest in participating in wechat interactions as mere "personal preference," and the idea that those who hold this "preference" are somehow weak or afraid of the people they'll encounter there. For me, it borders on insulting. I (and I believe many others) have strong ideological opposition to participating in these forums, not mere personal preferences. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Tue Sep 18 12:40:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 18 Sep 2018 12:40:50 +0000 Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [tc]Global Reachout Proposal In-Reply-To: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> Message-ID: <20180918124049.jw7xbufikxfx3w37@yuggoth.org> On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote: [...] > I can understand that IRC cannot be used in China which is very > painful and mostly it is used weChat. [...] I have yet to hear anyone provide first-hand confirmation that access to Freenode's IRC servers is explicitly blocked by the mainland Chinese government. There has been a lot of speculation that the usual draconian corporate firewall policies (surprise, the rest of the World gets to struggle with those too, it's not just a problem in China) are blocking a variety of messaging protocols from workplace networks and the people who encounter this can't tell the difference because they're already accustomed to much of their other communications being blocked at the border. 
I too have heard from someone who's heard from someone that "IRC can't be used in China" but the concrete reasons why continue to be missing from these discussions. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From witold.bedyk at est.fujitsu.com Tue Sep 18 12:42:47 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Tue, 18 Sep 2018 12:42:47 +0000 Subject: [openstack-dev] [ptg] Post-lunch presentations schedule In-Reply-To: References: Message-ID: <58faffcf366246529c39ea680776df66@R01UKEXCASM126.r01.fujitsu.local> Stephen, could you please share your presentation slides? Thanks Witek > -----Original Message----- > From: Thierry Carrez > Sent: Friday, 24 August 2018 11:21 > To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org> > Subject: [openstack-dev] [ptg] Post-lunch presentations schedule > Friday: Lightning talks > Fast-paced 5-min segments to talk about anything... Summaries of team > plans for Stein encouraged. A presentation of Sphinx in OpenStack by > stephenfin will open the show. From sylvain.bauza at gmail.com Tue Sep 18 12:52:28 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Tue, 18 Sep 2018 14:52:28 +0200 Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [tc]Global Reachout Proposal In-Reply-To: <20180918124049.jw7xbufikxfx3w37@yuggoth.org> References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> Message-ID: On Tue, Sep 18, 2018 at 14:41, Jeremy Stanley wrote: > On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote: > [...] > > I can understand that IRC cannot be used in China which is very > > painful and mostly it is used weChat. > [...] 
> > I have yet to hear anyone provide first-hand confirmation that > access to Freenode's IRC servers is explicitly blocked by the > mainland Chinese government. There has been a lot of speculation > that the usual draconian corporate firewall policies (surprise, the > rest of the World gets to struggle with those too, it's not just a > problem in China) are blocking a variety of messaging protocols from > workplace networks and the people who encounter this can't tell the > difference because they're already accustomed to much of their other > communications being blocked at the border. I too have heard from > someone who's heard from someone that "IRC can't be used in China" > but the concrete reasons why continue to be missing from these > discussions. > Thanks fungi, that's the crux of the problem I'd like to see discussed in the governance change. In this change, the non-use of the existing official communication tools is attributed to their being "cumbersome". See my comment on PS1; I thought the original concern was technical. Why are we discussing WeChat now? Is it because a large set of our contributors *can't* access IRC, or because they *prefer* something else? In the past, we made clear a couple of times why IRC is our communication channel. I don't see that those reasons are invalid now, but I'm still open to understanding why our community is becoming de facto fragmented. -Sylvain > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From clay.gerrard at gmail.com Tue Sep 18 13:38:27 2018 From: clay.gerrard at gmail.com (Clay Gerrard) Date: Tue, 18 Sep 2018 08:38:27 -0500 Subject: [openstack-dev] GUI for Swift object storage In-Reply-To: References: Message-ID: I don't know about a good open source cross-platform GUI client, but the SwiftStack one is slick and doesn't seem to be behind a paywall (yet?) https://www.swiftstack.com/downloads There's probably some proprietary integration that won't make sense - but it should work with any Swift end-point. Let me know how it goes! -Clay N.B. IANAL, so you should probably double check the license/terms if you're planning on doing anything more sophisticated than personal use. On Mon, Sep 17, 2018 at 9:31 PM M Ranga Swami Reddy wrote: > All GUI tools are non open source...need to pay like cyberduck etc. > Looking for open source GUI for Swift API access. > > On Tue, 18 Sep 2018, 06:41 John Dickinson, wrote: > >> That's a great question. >> >> A quick google search shows a few like Swift Explorer, Cyberduck, and >> Gladinet. But since Swift supports the S3 API (check with your cluster >> operator to see if this is enabled, or examine the results of a GET /info >> request), you can use any available S3 GUI client as well (as long as you >> can configure the endpoints you connect to). >> >> --John >> >> On 17 Sep 2018, at 16:48, M Ranga Swami Reddy wrote: >> >> Hi - is there any GUI (open source) available for Swift objects storage? 
>> >> Thanks >> Swa >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Sep 18 13:57:58 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 18 Sep 2018 13:57:58 +0000 Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [tc]Global Reachout Proposal In-Reply-To: References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> Message-ID: <20180918135758.2h6fqhwc3ika3xpf@yuggoth.org> On 2018-09-18 14:52:28 +0200 (+0200), Sylvain Bauza wrote: [...] > Why are we discussing about WeChat now? Is that because a large > set of our contributors *can't* access IRC or because they > *prefer* any other? Until we get confirmation either way, I'm going to work under the assumption that there are actual network barriers to using IRC for these contributors and that it's not just a matter of preference. I mainly want to know the source of these barriers because that will determine how to go about addressing them. 
If it's restrictions imposed by employers, it may be hard for employees to raise the issue in predominantly confrontation-averse cultures. The First Contact SIG is working on a document which outlines the communications and workflows used by our community with a focus on explaining to managers and other staff at contributing organizations what allowances they can make to ease and improve the experience of those they've tasked with working upstream. If the barriers are instead imposed by national government, then urging contributors within those borders to flout the law and interact with the rest of our community over IRC is not something which should be taken lightly. That's not to say it can't be solved, but the topic then is a much more political one and our community may not be an appropriate venue for those discussions. > In the past, we made clear for a couple of times why IRC is our > communication channel. I don't see those reasons to be invalid > now, but I'm still open to understand the problems about why our > community becomes de facto fragmented. I think the extended community is already fragmented across a variety of discussion fora. Some watch for relevant hashtags on Twitter and engage in discussions there. I gather there's an unofficial OpenStack Slack channel where lots of newcomers show up to ask questions because they assume the OpenStack community relies on Slack the same way the Kubernetes community does, and so a few volunteers from our community hang out there and try to redirect questions to more appropriate places. I've also heard tell of an OpenStack subReddit which some stackers help moderate and try to provide damage control/correct misstatements there. I don't think these are necessarily a problem, and the members of our community who work to spread accurate information to these places are in many cases helping reduce the actual degree of fragmentation. 
I'm still trying to make up my mind on 602697 which is why I haven't weighed in on the proposal yet. So far I feel like it probably doesn't bring anything new, since we already declare how and where official discussion takes place and the measure doesn't make any attempt to change that. We also don't regulate where unofficial discussions are allowed to take place, and so it doesn't open up any new possibilities which were previously disallowed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From thierry at openstack.org Tue Sep 18 13:59:51 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 18 Sep 2018 15:59:51 +0200 Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [tc]Global Reachout Proposal In-Reply-To: References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> Message-ID: <204f0abc-6391-3001-deae-b14a8de6710f@openstack.org> Sylvain Bauza wrote: > > > Le mar. 18 sept. 2018 à 14:41, Jeremy Stanley > a écrit : > > On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote: > [...] > > I can understand that IRC cannot be used in China which is very > > painful and mostly it is used weChat. > [...] > > I have yet to hear anyone provide first-hand confirmation that > access to Freenode's IRC servers is explicitly blocked by the > mainland Chinese government. There has been a lot of speculation > that the usual draconian corporate firewall policies (surprise, the > rest of the World gets to struggle with those too, it's not just a > problem in China) are blocking a variety of messaging protocols from > workplace networks and the people who encounter this can't tell the > difference because they're already accustomed to much of their other > communications being blocked at the border. 
I too have heard from > someone who's heard from someone that "IRC can't be used in China" > but the concrete reasons why continue to be missing from these > discussions. > > Thanks fungi, that's the crux of the problem I'd like to see discussed > in the governance change. > In this change, it states the non-use of existing and official > communication tools as to be "cumbersome". See my comment on PS1, I > thought the original concern was technical. > > Why are we discussing about WeChat now ? Is that because a large set of > our contributors *can't* access IRC or because they *prefer* any other ? > In the past, we made clear for a couple of times why IRC is our > communication channel. I don't see those reasons to be invalid now, but > I'm still open to understand the problems about why our community > becomes de facto fragmented. Agreed, I'm still trying to grasp the issue we are trying to solve here. We really need to differentiate between technical blockers (firewall), cultural blockers (language) and network effect preferences (preferred platform). We should definitely try to address technical blockers, as we don't want to exclude anyone. We can also allow for a bit of flexibility in the tools used in our community, to accommodate cultural blockers as much as we possibly can (keeping in mind that in the end, the code has to be written, proposed and discussed in a single language). We can even encourage community members to reach out on local social networks... But I'm reluctant to pass an official resolution to recommend that TC members engage on specific platforms because "everyone is there". 
-- Thierry Carrez (ttx) From zbitter at redhat.com Tue Sep 18 14:36:36 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 18 Sep 2018 10:36:36 -0400 Subject: [openstack-dev] [tc] notes from stein ptg meetings of the technical committee In-Reply-To: <019dad0e-8631-f6fe-e786-577f79f8edc6@gmail.com> References: <1537204771-sup-1590@lrrr.local> <5030a1e4-23cd-3cc9-1f89-e895efc7eb5b@gmail.com> <59f39dda-5113-7e8d-7402-8b1711d25f66@redhat.com> <1537216461-sup-8994@lrrr.local> <019dad0e-8631-f6fe-e786-577f79f8edc6@gmail.com> Message-ID: On 17/09/18 5:07 PM, Jay Pipes wrote: > Also, for the record, I actually wasn't referring to Adjutant > specifically when I referred in my original post to "only tangentially > related to cloud computing". I was referring to my recollection of > fairly recent history. I remember the seemingly endless debates about > whether some applicants "fit" the OpenStack ecosystem or whether the > applicant was merely trying to jump on a hype bandwagon for marketing > purposes. Again, I wasn't specifically referring to Adjutant here, so I > apologize if my words came across that way. Thanks for the clarification. What you're referring to is also an acknowledged problem, which we discussed at the Forum and are attempting to address with the Technical Vision (which we need to find a better name for). We didn't really discuss that on the Sunday though, because it was a topic on the formal agenda for Friday. Sunday's discussion was purely a retrospective on the Adjutant application, so you should read Doug's summary in that context. cheers, Zane. 
From jungleboyj at gmail.com Tue Sep 18 14:51:19 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 18 Sep 2018 09:51:19 -0500 Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [tc]Global Reachout Proposal In-Reply-To: <20180918124049.jw7xbufikxfx3w37@yuggoth.org> References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> Message-ID: <8492eb63-0bd4-6102-67f3-de18f6fe035e@gmail.com> On 9/18/2018 7:40 AM, Jeremy Stanley wrote: > On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote: > [...] >> I can understand that IRC cannot be used in China which is very >> painful and mostly it is used weChat. > [...] > > I have yet to hear anyone provide first-hand confirmation that > access to Freenode's IRC servers is explicitly blocked by the > mainland Chinese government. There has been a lot of speculation > that the usual draconian corporate firewall policies (surprise, the > rest of the World gets to struggle with those too, it's not just a > problem in China) are blocking a variety of messaging protocols from > workplace networks and the people who encounter this can't tell the > difference because they're already accustomed to much of their other > communications being blocked at the border. I too have heard from > someone who's heard from someone that "IRC can't be used in China" > but the concrete reasons why continue to be missing from these > discussions. > I have team members in Shanghai who are able to access IRC.  I will double check, but I am not aware of them doing anything to work around the national firewall.  So, I am guessing that we are dealing with the usual corporate firewall issues. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Tue Sep 18 14:52:25 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 18 Sep 2018 09:52:25 -0500 Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [tc]Global Reachout Proposal In-Reply-To: <20180918124049.jw7xbufikxfx3w37@yuggoth.org> References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> Message-ID: On Tue, Sep 18, 2018 at 7:40 AM, Jeremy Stanley wrote: > I have yet to hear anyone provide first-hand confirmation that > access to Freenode's IRC servers is explicitly blocked by the > mainland Chinese government. There has been a lot of speculation [...] Data point: I have a couple of reports from some of our StarlingX contributors that access to Freenode IRC works from our (Intel) sites but not from home. I am not going to speculate as to the cause, it clearly is not open access, but also not totally closed off. dt -- Dean Troyer dtroyer at gmail.com From cdent+os at anticdent.org Tue Sep 18 14:57:12 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 18 Sep 2018 15:57:12 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-38 Message-ID: HTML: https://anticdent.org/tc-report-18-38.html Rather than writing a TC Report this week, I've written a report on the [OpenStack Stein PTG](https://anticdent.org/openstack-stein-ptg.html). 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Tue Sep 18 15:17:02 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 18 Sep 2018 09:17:02 -0600 Subject: [openstack-dev] [Openstack-sigs] [tc][uc]Community Wide Long Term Goals In-Reply-To: References: Message-ID: <1537283760-sup-6582@lrrr.local> Excerpts from Zhipeng Huang's message of 2018-09-14 18:51:40 -0600: > Hi, > > Based upon the discussion we had at the TC session in the afternoon, I'm > starting to draft a patch to add long term goal mechanism into governance. > It is by no means a complete solution at the moment (still have not thought > through the execution method yet to make sure the outcome), but feel free > to provide your feedback at https://review.openstack.org/#/c/602799/ . > > -- > Zhipeng (Howard) Huang [I commented on the patch, but I'll also reply here for anyone not following the review.] I'm glad to see the increased interest in goals. Before we change the existing process, though, I would prefer to see engagement with the current process. We can start by having SIGs and WGs update the etherpad where we track goal proposals (https://etherpad.openstack.org/p/community-goals) and then we can see if we actually need to manage goals across multiple release cycles as a single unit. Doug From samuel at cassi.ba Tue Sep 18 15:20:28 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Tue, 18 Sep 2018 08:20:28 -0700 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: <20180918123249.2hxgyqjevcgpbav4@yuggoth.org> References: <20180917133211.gxcr5egf3r4rqsvf@yuggoth.org> <20180918000635.li7futcz5tyc43k2@yuggoth.org> <20180918123249.2hxgyqjevcgpbav4@yuggoth.org> Message-ID: On Tue, Sep 18, 2018 at 5:34 AM Jeremy Stanley wrote: > > On 2018-09-18 10:23:33 +0800 (+0800), Zhipeng Huang wrote: > [...] 
> > Jeremy, what I'm saying here, and also addressed in comments with > > the related resolution patch, is that personality reasons are the > > ones that we have to respect and no form of governance change > > could help solve the problem. However other than that, we could > > always find a way to address the issue for remedies, if we don't > > have a good answer now maybe we will have sometime later. > > > > Preference on social tooling is something that the technical > > committee is able to address, with isolation of usage of > > proprietary tools for certain scenario and also strict policy on > > enforcing the open source communication solutions we have today as > > the central ones the community will continue to use. This is not > > an unsolvable problem given that we have a technical committee, > > but personality issues are, no matter what governance instrument > > we have. > > Once again, I think we're talking past each other. I was replying to > (and quoted from) the provided sample rejection letter. First I > wanted to point out that I had already rejected the premise earlier > on this thread even though it was suggested that no rejection had > yet been provided. Second, the sample letter seemed to indicate what > I believe to be a fundamental misunderstanding among those pushing > this issue: the repeated attempts I've seen so far to paint a > disinterest in participating in wechat interactions as mere > "personal preference," and the idea that those who hold this > "preference" are somehow weak or afraid of the people they'll > encounter there. > > For me, it borders on insulting. I (and I believe many others) have > strong ideological opposition to participating in these forums, not > mere personal preferences. > -- > Jeremy Stanley > It is incredibly difficult to convey intent over primarily text-based mediums, of which I primarily interact with individuals I've never seen in-person. 
What is an ideological principle to me is a personal preference to someone else, and not even a thought to yet another. I work within other FLOSS projects outside of OpenStack. With some, my primary interactions take place over Slack, because they made the conscious choice to hoist their user community to a free instance, nominating people to an ambassador role for keeping their message intact on IRC. Other times, it's over GitHub, where the whole interaction takes place within the one platform. Within OpenStack, some people I've only ever worked with through code reviews or bug reports. Others, IRC or email. People are going to gravitate toward what makes sense for them, but that's where the lines between ideology and preference blur. Agreeing to keep the important lines of communication to a certain medium is the preference here, but it's also the ideological belief. The ongoing debates are not WeChat versus Twitter versus IRC versus Slack. They are about keeping the intent of being open, which is defined in the very namesake. Many moons ago, Chef OpenStack was advised to actively eschew video meetings before being approved as an OpenStack project under the Big Tent experiment during the rise of the hype. This happened, despite active efforts to keep the weekly video meetings open and inclusive, because there was no text record to reference. This, in turn, resulted in fewer and fewer developers being able to justify having an hour a week to 'mess around' on IRC, hastening the deflationary period. With a video running, it was easier to justify an hour in a conference room or an office to further the intent of openness in the community. I directly see the benefit in having a means to reach the greater community (hi! o/) but I do not directly see the correlation in defining a given social platform as being The Platform for Relevant Communications beyond email or code review. 
Email and code review are, by far, the most accessible points around the globe. For the Horde^Wcode, Samuel Cassiba (scas) From doug at doughellmann.com Tue Sep 18 15:24:32 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 18 Sep 2018 09:24:32 -0600 Subject: [openstack-dev] [goals][python3] week 6 update Message-ID: <1537283938-sup-3925@lrrr.local> This is the 6th week of the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == What we learned last week == I spoke with a few teams at the PTG to clarify the process and the "definition of done" for the goal. If you have similar questions, please get in touch. There are still a lot of patches failing jobs. == Ongoing and Completed Work == I have updated the status report output to separate the zuul migration, documentation, and unit test changes. In the chart below + means completed, - means not needed, and x/y means x open patches out of y total (except in the bottom line where the numbers are counting teams). I am not yet tracking any work on functional tests. 
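[Editor's note: for teams reviewing the zuul migration patches counted in the chart below, the changes generally move job settings into each project's own tree and add Python 3.6 unit test jobs. A minimal sketch of an in-repo .zuul.yaml follows; the stanza is hypothetical and the real patches are generated per-repo by the goal champions, but openstack-tox-py36 is the job name used by the goal:]

```yaml
# Hypothetical in-repo project stanza. openstack-tox-py36 runs the
# project's unit tests under Python 3.6 via tox.
- project:
    check:
      jobs:
        - openstack-tox-pep8
        - openstack-tox-py27
        - openstack-tox-py36
    gate:
      jobs:
        - openstack-tox-pep8
        - openstack-tox-py27
        - openstack-tox-py36
```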
+---------------------+---------+--------------+---------+----------+---------+-------+--------------------+
| Team                | zuul    | tox defaults | Docs    | 3.6 unit | Failing | Total | Champion           |
+---------------------+---------+--------------+---------+----------+---------+-------+--------------------+
| adjutant | + | - | - | + | 0 | 5 | Doug Hellmann |
| barbican | 11/ 13 | + | 1/ 3 | + | 6 | 20 | Doug Hellmann |
| blazar | 2/ 16 | + | + | + | 0 | 25 | Nguyen Hai |
| Chef OpenStack | + | - | - | - | 0 | 1 | Doug Hellmann |
| cinder | 1/ 22 | + | + | + | 1 | 31 | Doug Hellmann |
| cloudkitty | + | + | + | + | 0 | 24 | Doug Hellmann |
| congress | + | + | + | + | 0 | 24 | Nguyen Hai |
| cyborg | + | + | + | + | 0 | 16 | Nguyen Hai |
| designate | + | + | + | + | 0 | 24 | Nguyen Hai |
| Documentation | + | + | + | + | 0 | 22 | Doug Hellmann |
| dragonflow | + | - | + | + | 0 | 6 | Nguyen Hai |
| ec2-api | + | - | + | + | 0 | 12 | |
| freezer | 3/ 23 | + | + | 2/ 4 | 2 | 33 | |
| glance | 1/ 16 | 1/ 4 | 1/ 3 | 1/ 3 | 2 | 26 | Nguyen Hai |
| heat | 5/ 27 | 1/ 5 | 1/ 6 | 1/ 7 | 3 | 45 | Doug Hellmann |
| horizon | + | + | + | + | 0 | 11 | Nguyen Hai |
| I18n | + | - | - | - | 0 | 2 | Doug Hellmann |
| InteropWG | + | - | + | 1/ 3 | 0 | 10 | Doug Hellmann |
| ironic | 14/ 60 | + | 2/ 13 | 2/ 12 | 4 | 90 | Doug Hellmann |
| karbor | 15/ 15 | + | 2/ 2 | 3/ 3 | 0 | 22 | Nguyen Hai |
| keystone | + | + | + | + | 0 | 47 | Doug Hellmann |
| kolla | + | - | + | + | 0 | 12 | |
| kuryr | 5/ 16 | + | 2/ 6 | 2/ 5 | 8 | 28 | Doug Hellmann |
| magnum | + | + | + | + | 0 | 24 | |
| manila | 3/ 19 | + | + | + | 3 | 28 | Goutham Pacha Ravi |
| masakari | + | + | + | - | 0 | 21 | Nguyen Hai |
| mistral | + | + | + | + | 0 | 37 | Nguyen Hai |
| monasca | 3/ 66 | 1/ 7 | + | + | 4 | 90 | Doug Hellmann |
| murano | + | + | + | + | 0 | 37 | |
| neutron | 30/ 73 | + | 2/ 14 | 3/ 13 | 16 | 106 | Doug Hellmann |
| nova | 14/ 23 | + | 1/ 5 | 1/ 5 | 0 | 37 | |
| octavia | + | + | + | + | 0 | 34 | Nguyen Hai |
| OpenStack Charms | 17/117 | - | - | - | 14 | 117 | Doug Hellmann |
| OpenStack-Helm | + | - | + | - | 0 | 4 | |
| OpenStackAnsible | 31/270 | 2/ 32 | 9/ 65 | - | 23 | 367 | |
| OpenStackClient | + | + | + | + | 0 | 25 | |
| OpenStackSDK | 12/ 15 | + | 2/ 4 | 1/ 3 | 5 | 25 | |
| oslo | + | + | + | + | 0 | 219 | Doug Hellmann |
| Packaging-rpm | + | - | + | + | 0 | 7 | Doug Hellmann |
| PowerVMStackers | + | - | - | + | 0 | 18 | Doug Hellmann |
| Puppet OpenStack | + | - | + | - | 0 | 236 | Doug Hellmann |
| qinling | + | + | + | + | 0 | 12 | |
| Quality Assurance | + | + | + | + | 0 | 50 | Doug Hellmann |
| rally | + | + | + | - | 0 | 5 | Nguyen Hai |
| Release Management | + | - | - | + | 0 | 2 | Doug Hellmann |
| requirements | + | - | + | + | 0 | 7 | Doug Hellmann |
| sahara | + | + | + | + | 0 | 39 | Doug Hellmann |
| searchlight | + | + | + | + | 0 | 21 | Nguyen Hai |
| senlin | + | + | + | + | 0 | 23 | Nguyen Hai |
| SIGs | + | - | + | + | 0 | 9 | Doug Hellmann |
| solum | + | + | + | + | 0 | 23 | Nguyen Hai |
| storlets | + | + | + | + | 0 | 8 | |
| swift | + | 2/ 2 | + | + | 0 | 16 | Nguyen Hai |
| tacker | + | 1/ 2 | + | + | 1 | 23 | Nguyen Hai |
| Technical Committee | + | - | - | + | 0 | 7 | Doug Hellmann |
| Telemetry | 15/ 31 | + | 2/ 6 | 2/ 6 | 6 | 49 | Doug Hellmann |
| tricircle | 1/ 9 | + | + | + | 1 | 14 | Nguyen Hai |
| tripleo | 55/133 | + | 4/ 20 | 3/ 20 | 30 | 178 | Doug Hellmann |
| trove | 16/ 17 | 1/ 2 | 2/ 3 | 2/ 3 | 0 | 25 | Doug Hellmann |
| User Committee | + | - | 1/ 2 | - | 0 | 6 | Doug Hellmann |
| vitrage | + | + | + | + | 0 | 25 | Nguyen Hai |
| watcher | + | 1/ 5 | + | + | 0 | 27 | Nguyen Hai |
| winstackers | + | + | + | + | 0 | 17 | |
| zaqar | + | + | + | + | 0 | 24 | |
| zun | + | + | + | + | 0 | 21 | Nguyen Hai |
| | 45/ 65 | 40/ 48 | 44/ 58 | 43/ 56 | 129 | 2607 | |
+---------------------+---------+--------------+---------+----------+---------+-------+--------------------+

== Next Steps ==

All teams should be working 
to approve the patches proposed by the goal champions, and then to expand functional test coverage for python 3 and document their status in the wiki. == How can you help? == 1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) 2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects. 3. Work on adding functional test jobs that run under Python 3. == How can you ask for help? == If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list. == Reference Material == Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open Storyboard: https://storyboard.openstack.org/#!/board/104 Zuul migration notes: https://etherpad.openstack.org/p/python3-first Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 From doug at doughellmann.com Tue Sep 18 15:32:10 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 18 Sep 2018 09:32:10 -0600 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: <1537284599-sup-5054@lrrr.local> Excerpts from Zhipeng Huang's message of 2018-09-14 13:52:50 -0600: > This is a joint question from mnaser and me :) > > For the candidates who are running for tc seats, please reply to this email > to indicate if you are open to use certain social media app in certain > region (like 
Wechat in China, Line in Japan, etc.), in order to reach out > to the OpenStack developers in that region and help them to connect to the > upstream community as well as answering questions or other activities that > will help. (sorry for the long sentence ... ) > > Rico and I already sign up for Wechat communication for sure :) > > -- > Zhipeng (Howard) Huang [I replied on the governance resolution https://review.openstack.org/#/c/602697/, but I will copy my reply here since not everyone is following the comments there.] While I support the end result this resolution is trying to achieve, I don't think a TC resolution is the right way to go about it. The existing governance resolutions are primarily used to document decisions where there was a disagreement or solutions to issues that need to be formally addressed. I don't think either criteria applies in this case. No member of our community needs permission to use social media, so we don't need to document an opinion about that. Formally encouraging members of our community, whether they are on the TC or not, to reach out to other members also feels unnecessary. We proudly declare our community to be open to anyone who wants to participate in https://governance.openstack.org/tc/reference/opens.html and many members of our community do engage in the sort of outreach that is described in this resolution. If there is a particular area of the community not being heard, then we should work to understand why that is happening. If the cause is really that too few of the TC members are signed up for the right social media tools, then I think we have a much more fundamental issue to address in the expectations of the community and how the TC communicates as a group. 
I believe a better approach to achieve the outcome this resolution is trying to implement would be to organize some of the folks who have already indicated that they are willing to help with outreach and to work together to bring anyone who wants to participate in the community into the existing communication channels (especially the mailing list), so that we can all collaborate there together. There was support for an approach like that in the room during the meeting last week. Doug From zigo at debian.org Tue Sep 18 16:01:22 2018 From: zigo at debian.org (Thomas Goirand) Date: Tue, 18 Sep 2018 18:01:22 +0200 Subject: [openstack-dev] [election][tc]Question for candidates about global reachout In-Reply-To: References: Message-ID: On 09/14/2018 09:52 PM, Zhipeng Huang wrote: > This is a joint question from mnaser and me :) > > For the candidates who are running for tc seats, please reply to this > email to indicate if you are open to use certain social media app in > certain region (like Wechat in China Even if I do use WeChat because of some of my Chinese friends that know only that, I am strongly against using such a proprietary network, especially for open development. It's ok-ish if some Chinese want to create local community in WeChat. It's not if the whole project vouches for this type of networks. Cheers, Thomas Goirand (zigo) From sylvain.bauza at gmail.com Tue Sep 18 16:09:44 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Tue, 18 Sep 2018 18:09:44 +0200 Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [tc]Global Reachout Proposal In-Reply-To: <204f0abc-6391-3001-deae-b14a8de6710f@openstack.org> References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> <204f0abc-6391-3001-deae-b14a8de6710f@openstack.org> Message-ID: Le mar. 18 sept. 2018 à 16:00, Thierry Carrez a écrit : > Sylvain Bauza wrote: > > > > > > Le mar. 18 sept. 
2018 à 14:41, Jeremy Stanley > > a écrit : > > > > On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote: > > [...] > > > I can understand that IRC cannot be used in China which is very > > > painful and mostly it is used weChat. > > [...] > > > > I have yet to hear anyone provide first-hand confirmation that > > access to Freenode's IRC servers is explicitly blocked by the > > mainland Chinese government. There has been a lot of speculation > > that the usual draconian corporate firewall policies (surprise, the > > rest of the World gets to struggle with those too, it's not just a > > problem in China) are blocking a variety of messaging protocols from > > workplace networks and the people who encounter this can't tell the > > difference because they're already accustomed to much of their other > > communications being blocked at the border. I too have heard from > > someone who's heard from someone that "IRC can't be used in China" > > but the concrete reasons why continue to be missing from these > > discussions. > > > > Thanks fungi, that's the crux of the problem I'd like to see discussed > > in the governance change. > > In this change, it states the non-use of existing and official > > communication tools as to be "cumbersome". See my comment on PS1, I > > thought the original concern was technical. > > > > Why are we discussing about WeChat now ? Is that because a large set of > > our contributors *can't* access IRC or because they *prefer* any other ? > > In the past, we made clear for a couple of times why IRC is our > > communication channel. I don't see those reasons to be invalid now, but > > I'm still open to understand the problems about why our community > > becomes de facto fragmented. > > Agreed, I'm still trying to grasp the issue we are trying to solve here. > > We really need to differentiate between technical blockers (firewall), > cultural blockers (language) and network effect preferences (preferred > platform). 
> > We should definitely try to address technical blockers, as we don't want > to exclude anyone. We can also allow for a bit of flexibility in the > tools used in our community, to accommodate cultural blockers as much as > we possibly can (keeping in mind that in the end, the code has to be > written, proposed and discussed in a single language). We can even > encourage community members to reach out on local social networks... But > I'm reluctant to pass an official resolution to recommend that TC > members engage on specific platforms because "everyone is there". > > I second your opinion on this. Before voting on a TC resolution, we at least need to understand the problem. Like I said previously, stating 'cumbersome' in the proposed resolution doesn't imply a technical issue, hence my jumping straight to the third possibility you mentioned, which is "by convenience". In that case, the TC should rather reinforce the message that, as a whole community, we try to avoid silos and that contributors should be strongly encouraged to move discussions from other channels to the official ones. Having the First Contact SIG be the first line for helping those people migrate to IRC (by helping them understand how it works, how to get started, and which kind of setup is preferable (e.g. bouncers)) seems a great idea. -Sylvain -- > Thierry Carrez (ttx) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kgiusti at gmail.com Tue Sep 18 16:15:41 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Tue, 18 Sep 2018 12:15:41 -0400 Subject: [openstack-dev] [oslo][tacker][daisycloud-core][meteos] Removal of rpc_backend config opt from oslo.messaging Message-ID: Thanks to work done by Steve Kowalik we're ready to remove the old rpc_backend transport configuration option that has been deprecated since mid 2016. This removal involves changes to the oslo.messaging.ConfFixture as well. Steve has provided patches to those projects affected by these changes. Almost all projects have merged these patches. There are a few projects - included in the subject line - where the necessary patches have not yet landed. If you're a committer on one of these projects please make an effort to review the patches proposed for your project: https://review.openstack.org/#/q/topic:bug/1712399+status:open Our goal is to land the removal next week. thanks -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim at swiftstack.com Tue Sep 18 16:35:31 2018 From: tim at swiftstack.com (Tim Burke) Date: Tue, 18 Sep 2018 09:35:31 -0700 Subject: [openstack-dev] GUI for Swift object storage In-Reply-To: References: Message-ID: Hate to nitpick, but Cyberduck is licensed GPLv3 -- you can browse the source (and confirm the license) at https://trac.cyberduck.io/browser/trunk and https://trac.cyberduck.io/ indicates the source is available via git or svn. They do nag you to donate, though. Swift Explorer is Apache 2, available at https://github.com/roikku/swift-explorer . I don't know anything about Gladinet. Tim > On Sep 17, 2018, at 7:31 PM, M Ranga Swami Reddy wrote: > > All GUI tools are non open source...need to pay like cyberduck etc. > Looking for open source GUI for Swift API access. > > On Tue, 18 Sep 2018, 06:41 John Dickinson, > wrote: > That's a great question. 
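[Editor's note on Ken Giusti's rpc_backend thread above: the removal forces a one-option change in each affected service's configuration file. A hedged sketch of the before/after — the driver, host, and credentials below are pure placeholders, not taken from any real deployment:]

```ini
# Old style, deprecated since mid 2016 and removed by this change:
# [DEFAULT]
# rpc_backend = rabbit

# New style: the driver, credentials, and host all move into a single
# transport_url (values below are placeholders):
[DEFAULT]
transport_url = rabbit://openstack:RPC_PASS@controller:5672/
```

Projects that exercise messaging in their tests through oslo.messaging.ConfFixture are affected the same way, which is why the fixture had to change alongside the option.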
> > A quick google search shows a few like Swift Explorer, Cyberduck, and Gladinet. But since Swift supports the S3 API (check with your cluster operator to see if this is enabled, or examine the results of a GET /info request), you can use any available S3 GUI client as well (as long as you can configure the endpoints you connect to). > > --John > > On 17 Sep 2018, at 16:48, M Ranga Swami Reddy wrote: > > Hi - is there any GUI (open source) available for Swift objects storage? > > Thanks > Swa > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate.johnston at redhat.com Tue Sep 18 16:46:54 2018 From: nate.johnston at redhat.com (Nate Johnston) Date: Tue, 18 Sep 2018 12:46:54 -0400 Subject: [openstack-dev] [python3] tempest and grenade conversion to python 3.6 Message-ID: <20180918164654.zld7rzxessmpogns@bishop> Hello python 3.6 champions, I have looked around a little, and I don't see a method for me to specifically select the version of python that the tempest and grenade jobs for my project (neutron) are using. I assume one of four things is at play here: A. 
These projects already shifted to python 3 and I don't have to worry about it B. There is a toggle for the python version I just have not seen yet C. These projects are still on python 2 and need help to do a conversion to python 3, which would affect all customers D. Something else that I have failed to imagine Could you elaborate which of these options properly reflects the state of affairs? If the answer is "C" then perhaps we can start a discussion on that migration. Thanks! Nate Johnston From duc.openstack at gmail.com Tue Sep 18 16:50:58 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Tue, 18 Sep 2018 09:50:58 -0700 Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration In-Reply-To: References: Message-ID: Hi Rico, I'm the Senlin PTL and would be happy to have a forum discussion in Berlin about the future of autoscaling. Can you go ahead and start an etherpad to capture the proposed agenda and discussion items? Also, feel free to submit the forum submission so that we can get it on the schedule. Thanks, Duc (dtruong) On Mon, Sep 17, 2018 at 8:28 PM Rico Lin wrote: > *TL;DR* > *How about a forum in Berlin for discussing autoscaling integration (as a > long-term goal) in OpenStack?* > > > Hi all, as we start to discuss how can we join develop from Heat and > Senlin as we originally planned when we decided to fork Senlin from Heat > long time ago. > > IMO the biggest issues we got now are we got users using autoscaling in > both services, appears there is a lot of duplicated effort, and some great > enhancement didn't exist in another service. > As a long-term goal (from the beginning), we should try to join > development to sync functionality, and move users to use Senlin for > autoscaling. So we should start to review this goal, or at least we should > try to discuss how can we help users without break or enforce anything. 
> > What will be great if we can build common library cross projects, and use > that common library in both projects, make sure we have all improvement > implemented in that library, finally to use Senlin from that from that > library call in Heat autoscaling group. And in long-term, we gonna let all > user use more general way instead of multiple ways but generate huge > confusing for users. > > *As an action, I propose we have a forum in Berlin and sync up all effort > from both teams to plan for idea scenario design. The forum submission [1] > ended at 9/26.* > Also would benefit from both teams to start to think about how they can > modulize those functionalities for easier integration in the future. > > From some Heat PTG sessions, we keep bring out ideas on how can we improve > current solutions for Autoscaling. We should start to talk about will it > make sense if we combine all group resources into one, and inherit from it > for other resources (ideally gonna deprecate rest resource types). Like we > can do Batch create/delete in Resource Group, but not in ASG. We definitely > got some unsynchronized works inner Heat, and cross Heat and Senlin. > > Please let me know who is interesting in this idea, so we can work > together and reach our goal step by step. > Also please provide though if you got any concerns about this proposal. > > [1] https://www.openstack.org/summit/berlin-2018/call-for-presentations > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cboylan at sapwetik.org Tue Sep 18 16:53:45 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 18 Sep 2018 09:53:45 -0700 Subject: [openstack-dev] [python3] tempest and grenade conversion to python 3.6 In-Reply-To: <20180918164654.zld7rzxessmpogns@bishop> References: <20180918164654.zld7rzxessmpogns@bishop> Message-ID: <1537289625.1170266.1512393384.53855DF8@webmail.messagingengine.com> On Tue, Sep 18, 2018, at 9:46 AM, Nate Johnston wrote: > Hello python 3.6 champions, > > I have looked around a little, and I don't see a method for me to > specifically select the version of python that the tempest and grenade > jobs for my project (neutron) are using. I assume one of four things > is at play here: > > A. These projects already shifted to python 3 and I don't have to worry > about it > > B. There is a toggle for the python version I just have not seen yet > > C. These projects are still on python 2 and need help to do a conversion > to python 3, which would affect all customers > > D. Something else that I have failed to imagine > > Could you elaborate which of these options properly reflects the state > of affairs? If the answer is "C" then perhaps we can start a discussion > on that migration. For our devstack and grenade jobs tempest is installed using tox [0]. And since the full testenv in tempest's tox.ini doesn't specify a python version [1] I expect that it will attempt a python2 virtualenv on every platform (Arch linux may be an exception but we don't test that). I think that means C is the situation here. To change that you can set basepython to python3 (see [2] for an example) which will run tempest under whichever python3 is present on the system. The one gotcha for this is that it will break tempest on centos which does not have python3. Maybe the thing to do there is add a full-python2 testenv that centos can run? 
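[Editor's note: Clark's suggestion can be sketched directly in tox.ini. The testenv names and layout below are illustrative rather than copied from tempest's actual file:]

```ini
# tox.ini (sketch; testenv names are illustrative)
[testenv:full]
# Pin to python3 so the venv no longer falls back to the platform
# default python2:
basepython = python3

# Possible escape hatch for platforms without python3 (e.g. centos):
[testenv:full-python2]
basepython = python2.7
```

The [2] reference below shows the same basepython pattern as used in zuul's own tox.ini.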
[0] https://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/tempest#n653 [1] https://git.openstack.org/cgit/openstack/tempest/tree/tox.ini#n74 [2] https://git.openstack.org/cgit/openstack-infra/zuul/tree/tox.ini#n7 Hope this helps, Clark From lbragstad at gmail.com Tue Sep 18 16:56:22 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 18 Sep 2018 11:56:22 -0500 Subject: [openstack-dev] [Openstack-sigs] [tc][uc]Community Wide Long Term Goals In-Reply-To: <1537283760-sup-6582@lrrr.local> References: <1537283760-sup-6582@lrrr.local> Message-ID: On Tue, Sep 18, 2018 at 10:17 AM Doug Hellmann wrote: > Excerpts from Zhipeng Huang's message of 2018-09-14 18:51:40 -0600: > > Hi, > > > > Based upon the discussion we had at the TC session in the afternoon, I'm > > starting to draft a patch to add long term goal mechanism into > governance. > > It is by no means a complete solution at the moment (still have not > thought > > through the execution method yet to make sure the outcome), but feel free > > to provide your feedback at https://review.openstack.org/#/c/602799/ . > > > > -- > > Zhipeng (Howard) Huang > > [I commented on the patch, but I'll also reply here for anyone not > following the review.] > > I'm glad to see the increased interest in goals. Before we change > the existing process, though, I would prefer to see engagement with > the current process. We can start by having SIGs and WGs update the > etherpad where we track goal proposals > (https://etherpad.openstack.org/p/community-goals) and then we can > see if we actually need to manage goals across multiple release > cycles as a single unit. > Depending on the official outcome of this resolution, I was going to try and use the granular RBAC work to test out this process. I can still do that, or I can hold off if appropriate. 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hrybacki at redhat.com Tue Sep 18 17:00:37 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Tue, 18 Sep 2018 13:00:37 -0400 Subject: [openstack-dev] [Openstack-sigs] [tc][uc]Community Wide Long Term Goals In-Reply-To: References: <1537283760-sup-6582@lrrr.local> Message-ID: On Tue, Sep 18, 2018 at 12:57 PM Lance Bragstad wrote: > > > > On Tue, Sep 18, 2018 at 10:17 AM Doug Hellmann wrote: >> >> Excerpts from Zhipeng Huang's message of 2018-09-14 18:51:40 -0600: >> > Hi, >> > >> > Based upon the discussion we had at the TC session in the afternoon, I'm >> > starting to draft a patch to add long term goal mechanism into governance. >> > It is by no means a complete solution at the moment (still have not thought >> > through the execution method yet to make sure the outcome), but feel free >> > to provide your feedback at https://review.openstack.org/#/c/602799/ . >> > >> > -- >> > Zhipeng (Howard) Huang >> >> [I commented on the patch, but I'll also reply here for anyone not >> following the review.] >> >> I'm glad to see the increased interest in goals. Before we change >> the existing process, though, I would prefer to see engagement with >> the current process. We can start by having SIGs and WGs update the >> etherpad where we track goal proposals >> (https://etherpad.openstack.org/p/community-goals) and then we can >> see if we actually need to manage goals across multiple release >> cycles as a single unit. > > > Depending on the official outcome of this resolution, I was going to try and use the granular RBAC work to test out this process. > My thoughts exactly. 
> I can still do that, or I can hold off if appropriate. Breaking down the remaining work per Doug's suggestion would be good. We've done this a time-or-three before as the target has moved around. It's probably due for another one. /R > >> >> >> Doug >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Sep 18 17:16:55 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 18 Sep 2018 11:16:55 -0600 Subject: [openstack-dev] [Openstack-sigs] [tc][uc]Community Wide Long Term Goals In-Reply-To: References: <1537283760-sup-6582@lrrr.local> Message-ID: <1537290373-sup-9498@lrrr.local> Excerpts from Lance Bragstad's message of 2018-09-18 11:56:22 -0500: > On Tue, Sep 18, 2018 at 10:17 AM Doug Hellmann > > wrote: > > > Excerpts from Zhipeng Huang's message of 2018-09-14 18:51:40 -0600: > > > Hi, > > > > > > Based upon the discussion we had at the TC session in the afternoon, I'm > > > starting to draft a patch to add long term goal mechanism into > > governance. > > > It is by no means a complete solution at the moment (still have not > > thought > > > through the execution method yet to make sure the outcome), but feel free > > > to provide your feedback at https://review.openstack.org/#/c/602799/ . > > > > > > -- > > > Zhipeng (Howard) Huang > > > > [I commented on the patch, but I'll also reply here for anyone not > > following the review.] > > > > I'm glad to see the increased interest in goals. 
Before we change > > the existing process, though, I would prefer to see engagement with > > the current process. We can start by having SIGs and WGs update the > > etherpad where we track goal proposals > > (https://etherpad.openstack.org/p/community-goals) and then we can > > see if we actually need to manage goals across multiple release > > cycles as a single unit. > > > > Depending on the official outcome of this resolution, I was going to try > and use the granular RBAC work to test out this process. > > I can still do that, or I can hold off if appropriate. The Python 3 transition has been going on for 5-6 years now, and started before we had even the current goals process in place. I think it's completely possible for us to do work that takes a long time without making the goals process more complex. Let's try to keep the process lightweight, and make incremental changes to it based on real shortcomings (adding champions is one example of a tweak that made a significant improvement). It may be easy to continue to prioritize a follow-up part of a multi-part goal we have already started, but I would rather we don't *require* that in case we have some other significant work that we have to rally folks to complete (I'm thinking of things like addressing security issues, some new technical challenge that comes up, or other community needs that we don't foresee at the start of a multi-part goal). We designed the current process to encourage those sorts of conversations to happen on a regular basis, after all, so I'm very happy to see interest in using it. But let's try to use what we have before we assume it's broken. I think you could (and should) start by describing the stages you anticipate for the RBAC stuff, and then we can see which parts need to be done before we adopt a goal, which part are goals, and whether enough momentum picks up that we don't need to make later parts formal goals. 
Doug From mriedemos at gmail.com Tue Sep 18 17:26:30 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 18 Sep 2018 12:26:30 -0500 Subject: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)? In-Reply-To: <165ea8d9f10.add97103175992.5456929857422374986@ghanshyammann.com> References: <70abbabe-2480-4c25-0665-a14b2eb5f3ab@gmail.com> <75ef2549-dfba-3267-5e76-0c59c64cd4ac@gmail.com> <165ea8d9f10.add97103175992.5456929857422374986@ghanshyammann.com> Message-ID: <46cee3db-eecb-97f6-a793-c33d57a71ad2@gmail.com> On 9/17/2018 9:41 PM, Ghanshyam Mann wrote: > ---- On Tue, 18 Sep 2018 09:33:30 +0900 Alex Xu wrote ---- > > That only means after 599276 we only have servers API and os-instance-action API stopped accepting the undefined query parameter. > > What I'm thinking about is checking all the APIs, add json-query-param checking with additionalProperties=True if the API don't have yet. And using another microversion set additionalProperties to False, then the whole Nova API become consistent. > > I too vote for doing it for all other API together. Restricting the unknown query or request param are very useful for API consistency. Item#1 in this etherpadhttps://etherpad.openstack.org/p/nova-api-cleanup > > If you would like, i can propose a quick spec for that and positive response to do all together then we skip to do that in 599276 otherwise do it for GET servers in 599276. > > -gmann I don't care too much about changing all of the other additionalProperties=False in a single microversion given we're already kind of inconsistent with this in a few APIs. Consistency is ideal, but I thought we'd be lumping in other cleanups from the etherpad into the same microversion/spec which will likely slow it down during spec review. For example, I'd really like to get rid of the weird server response field prefixes like "OS-EXT-SRV-ATTR:". 
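[Editor's note for readers following the schema details in this thread: the additionalProperties flag under discussion controls whether a validated request may carry keys outside the declared set. A toy, self-contained illustration — the parameter names and validator here are invented for the sketch, not Nova's actual schema code:]

```python
# Toy model of JSON Schema's additionalProperties semantics applied to
# query parameters. Names are invented for illustration only.
ALLOWED_PARAMS = {"limit", "marker", "sort_key"}

def validate_query(params, additional_properties):
    """Mimic additionalProperties: when False, unknown keys fail."""
    unknown = set(params) - ALLOWED_PARAMS
    if unknown and not additional_properties:
        # In the API analogy this would be an HTTP 400 response.
        return False, sorted(unknown)
    return True, []

# additionalProperties=True: a typo'd parameter is silently ignored.
print(validate_query({"limit": "10", "limt": "5"}, True))   # (True, [])
# additionalProperties=False: the same typo is rejected.
print(validate_query({"limit": "10", "limt": "5"}, False))  # (False, ['limt'])
```

Flipping the flag in a new microversion (as patch 599276 does for servers) is what turns previously-ignored typos into visible errors.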
Would we put those into the same mass cleanup microversion / spec or split them into individual microversions? I'd prefer not to see an explosion of microversions for cleaning up oddities in the API, but I could see how doing them all in a single microversion could be complicated. -- Thanks, Matt From lbragstad at gmail.com Tue Sep 18 17:27:06 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 18 Sep 2018 12:27:06 -0500 Subject: [openstack-dev] [Openstack-sigs] [tc][uc]Community Wide Long Term Goals In-Reply-To: <1537290373-sup-9498@lrrr.local> References: <1537283760-sup-6582@lrrr.local> <1537290373-sup-9498@lrrr.local> Message-ID: On Tue, Sep 18, 2018 at 12:17 PM Doug Hellmann wrote: > Excerpts from Lance Bragstad's message of 2018-09-18 11:56:22 -0500: > > On Tue, Sep 18, 2018 at 10:17 AM Doug Hellmann > > wrote: > > > > > Excerpts from Zhipeng Huang's message of 2018-09-14 18:51:40 -0600: > > > > Hi, > > > > > > > > Based upon the discussion we had at the TC session in the afternoon, > I'm > > > > starting to draft a patch to add long term goal mechanism into > > > governance. > > > > It is by no means a complete solution at the moment (still have not > > > thought > > > > through the execution method yet to make sure the outcome), but feel > free > > > > to provide your feedback at https://review.openstack.org/#/c/602799/ > . > > > > > > > > -- > > > > Zhipeng (Howard) Huang > > > > > > [I commented on the patch, but I'll also reply here for anyone not > > > following the review.] > > > > > > I'm glad to see the increased interest in goals. Before we change > > > the existing process, though, I would prefer to see engagement with > > > the current process. We can start by having SIGs and WGs update the > > > etherpad where we track goal proposals > > > (https://etherpad.openstack.org/p/community-goals) and then we can > > > see if we actually need to manage goals across multiple release > > > cycles as a single unit. 
> > > > > > > Depending on the official outcome of this resolution, I was going to try > > and use the granular RBAC work to test out this process. > > > > I can still do that, or I can hold off if appropriate. > > The Python 3 transition has been going on for 5-6 years now, and > started before we had even the current goals process in place. I > think it's completely possible for us to do work that takes a long > time without making the goals process more complex. Let's try to > keep the process lightweight, and make incremental changes to it > based on real shortcomings (adding champions is one example of a > tweak that made a significant improvement). > > It may be easy to continue to prioritize a follow-up part of a > multi-part goal we have already started, but I would rather we don't > *require* that in case we have some other significant work that we > have to rally folks to complete (I'm thinking of things like > addressing security issues, some new technical challenge that comes > up, or other community needs that we don't foresee at the start of > a multi-part goal). We designed the current process to encourage > those sorts of conversations to happen on a regular basis, after > all, so I'm very happy to see interest in using it. But let's try > to use what we have before we assume it's broken. > That's fair. > > I think you could (and should) start by describing the stages you > anticipate for the RBAC stuff, and then we can see which parts need > to be done before we adopt a goal, which part are goals, and whether > enough momentum picks up that we don't need to make later parts > formal goals. > Do you have a particular medium in mind? 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Sep 18 17:28:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 18 Sep 2018 11:28:29 -0600 Subject: [openstack-dev] [python3] tempest and grenade conversion to python 3.6 In-Reply-To: <1537289625.1170266.1512393384.53855DF8@webmail.messagingengine.com> References: <20180918164654.zld7rzxessmpogns@bishop> <1537289625.1170266.1512393384.53855DF8@webmail.messagingengine.com> Message-ID: <1537291156-sup-8745@lrrr.local> Excerpts from Clark Boylan's message of 2018-09-18 09:53:45 -0700: > On Tue, Sep 18, 2018, at 9:46 AM, Nate Johnston wrote: > > Hello python 3.6 champions, > > > > I have looked around a little, and I don't see a method for me to > > specifically select the version of python that the tempest and grenade > > jobs for my project (neutron) are using. I assume one of four things > > is at play here: > > > > A. These projects already shifted to python 3 and I don't have to worry > > about it > > > > B. There is a toggle for the python version I just have not seen yet > > > > C. These projects are still on python 2 and need help to do a conversion > > to python 3, which would affect all customers > > > > D. Something else that I have failed to imagine > > > > Could you elaborate which of these options properly reflects the state > > of affairs? If the answer is "C" then perhaps we can start a discussion > > on that migration. > > For our devstack and grenade jobs tempest is installed using tox [0]. 
And since the full testenv in tempest's tox.ini doesn't specify a python version [1] I expect that it will attempt a python2 virtualenv on every platform (Arch linux may be an exception but we don't test that). > > I think that means C is the situation here. To change that you can set basepython to python3 (see [2] for an example) which will run tempest under whichever python3 is present on the system. The one gotcha for this is that it will break tempest on centos which does not have python3. Maybe the thing to do there is add a full-python2 testenv that centos can run? > > [0] https://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/tempest#n653 > [1] https://git.openstack.org/cgit/openstack/tempest/tree/tox.ini#n74 > [2] https://git.openstack.org/cgit/openstack-infra/zuul/tree/tox.ini#n7 > > Hope this helps, > Clark > While having tempest run under python 3 would be great, I'm not sure that's necessary in order to test a service. Don't those jobs use devstack to install the system being tested? And devstack uses some environment variables to control the version of python. For example the tempest-full-py3 job [1] defines USE_PYTHON3 as 'true'. What's probably missing is a version of the grenade job that allows us to control that USE_PYTHON3 variable before and after the upgrade. I see a few different grenade jobs (neutron-grenade, neutron-grenade-multinode, legacy-grenade-dsvm-neutron-multinode-live-migration, possibly others). Which ones are "current" and would make a good candidate as a base for a new job? 
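[Editor's note: a rough shape for the job Doug describes, modeled on the tempest-full-py3 pattern he cites. The job name, parent, and variable plumbing are guesses for illustration, not an existing definition:]

```yaml
# .zuul.yaml sketch -- job names are hypothetical
- job:
    name: neutron-grenade-py3
    parent: neutron-grenade
    description: Grenade upgrade run under python3 before and after.
    vars:
      devstack_localrc:
        USE_PYTHON3: true
```

Whether the legacy grenade jobs actually consume devstack_localrc this way would need checking; they may instead need USE_PYTHON3 exported in their run playbooks.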
Doug [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n70 From lyarwood at redhat.com Tue Sep 18 17:33:10 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 18 Sep 2018 18:33:10 +0100 Subject: [openstack-dev] [puppet] [placement] In-Reply-To: References: <20180917092815.zjs5gbewqm2lytjp@lyarwood.usersys.redhat.com> Message-ID: <20180918173310.572g2oy27k6v4tji@lyarwood.usersys.redhat.com> On 17-09-18 08:48:01, Emilien Macchi wrote: > On Mon, Sep 17, 2018 at 5:29 AM Lee Yarwood wrote: > > > FWIW I've also started work on the RDO packaging front [1] and would be > > happy to help with this puppet extraction. > > > > Good to know, thanks. > Once we have the repo in place, here is a plan proposal: > > * Populate the repo with cookiecutter & adjust to Placement service > * cp code from nova::placement (and nova::wsgi::apache_placement) > * package placement and puppet-placement in RDO > * start testing puppet-placement in puppet-openstack-integration > * switch tripleo-common / THT to deploy placement in nova_placement > container > * switch tripleo to use puppet-placement (in THT) > * probably rename nova_placement container/service into placement or > something generic > > Feedback is welcome, Thanks Emilien, The only thing I'd add would be TripleO/THT powered upgrades, after switching to puppet-placement. We discussed this in both the Nova and Upgrades SIG rooms and the end goal was to have TripleO able to extract placement during an upgrade to S by M2. I appreciate this is an optimistic goal for upgrades but I think it's just about possible given the extended cycle. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From doug at doughellmann.com Tue Sep 18 17:51:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 18 Sep 2018 11:51:45 -0600 Subject: [openstack-dev] [Openstack-sigs] [tc][uc]Community Wide Long Term Goals In-Reply-To: References: <1537283760-sup-6582@lrrr.local> <1537290373-sup-9498@lrrr.local> Message-ID: <1537293028-sup-2612@lrrr.local> Excerpts from Lance Bragstad's message of 2018-09-18 12:27:06 -0500: > On Tue, Sep 18, 2018 at 12:17 PM Doug Hellmann > wrote: > > > Excerpts from Lance Bragstad's message of 2018-09-18 11:56:22 -0500: > > > On Tue, Sep 18, 2018 at 10:17 AM Doug Hellmann > > > wrote: > > > > > > > Excerpts from Zhipeng Huang's message of 2018-09-14 18:51:40 -0600: > > > > > Hi, > > > > > > > > > > Based upon the discussion we had at the TC session in the afternoon, > > I'm > > > > > starting to draft a patch to add long term goal mechanism into > > > > governance. > > > > > It is by no means a complete solution at the moment (still have not > > > > thought > > > > > through the execution method yet to make sure the outcome), but feel > > free > > > > > to provide your feedback at https://review.openstack.org/#/c/602799/ > > . > > > > > > > > > > -- > > > > > Zhipeng (Howard) Huang > > > > > > > > [I commented on the patch, but I'll also reply here for anyone not > > > > following the review.] > > > > > > > > I'm glad to see the increased interest in goals. Before we change > > > > the existing process, though, I would prefer to see engagement with > > > > the current process. We can start by having SIGs and WGs update the > > > > etherpad where we track goal proposals > > > > (https://etherpad.openstack.org/p/community-goals) and then we can > > > > see if we actually need to manage goals across multiple release > > > > cycles as a single unit. 
> > > > > > > > > > Depending on the official outcome of this resolution, I was going to try > > > and use the granular RBAC work to test out this process. > > > > > > I can still do that, or I can hold off if appropriate. > > > > The Python 3 transition has been going on for 5-6 years now, and > > started before we had even the current goals process in place. I > > think it's completely possible for us to do work that takes a long > > time without making the goals process more complex. Let's try to > > keep the process lightweight, and make incremental changes to it > > based on real shortcomings (adding champions is one example of a > > tweak that made a significant improvement). > > > > It may be easy to continue to prioritize a follow-up part of a > > multi-part goal we have already started, but I would rather we don't > > *require* that in case we have some other significant work that we > > have to rally folks to complete (I'm thinking of things like > > addressing security issues, some new technical challenge that comes > > up, or other community needs that we don't foresee at the start of > > a multi-part goal). We designed the current process to encourage > > those sorts of conversations to happen on a regular basis, after > > all, so I'm very happy to see interest in using it. But let's try > > to use what we have before we assume it's broken. > > > > That's fair. > > > > > I think you could (and should) start by describing the stages you > > anticipate for the RBAC stuff, and then we can see which parts need > > to be done before we adopt a goal, which part are goals, and whether > > enough momentum picks up that we don't need to make later parts > > formal goals. > > > > Do you have a particular medium in mind? Not really. As we discussed in IRC, you'll want to balance the need to have something that's easy to edit with the need to have our usual peer review. 
So some things like tracking the phases of work may be done better with storyboard or an etherpad, and other things like working out significant technical details may work better as documentation or spec reviews. Doug From rico.lin.guanyu at gmail.com Tue Sep 18 18:45:58 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 19 Sep 2018 02:45:58 +0800 Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration In-Reply-To: References: Message-ID: Cool, Duc, and it's nicely started: https://etherpad.openstack.org/p/autoscaling-integration-and-feedback I also submitted the etherpad, and will add you as moderator once it's selected (don't know why, but I can't add any more now from the web). Please add whatever you like to that etherpad, and I will try to input more information ASAP. All information will continue to be used with or without that forum. On Wed, Sep 19, 2018 at 12:51 AM Duc Truong wrote: > Hi Rico, > > I'm the Senlin PTL and would be happy to have a forum discussion in > Berlin about the future of autoscaling. > > Can you go ahead and start an etherpad to capture the proposed agenda > and discussion items? Also, feel free to submit the forum submission > so that we can get it on the schedule. > > Thanks, > > Duc (dtruong) > > On Mon, Sep 17, 2018 at 8:28 PM Rico Lin > wrote: > >> *TL;DR* >> *How about a forum in Berlin for discussing autoscaling integration (as a >> long-term goal) in OpenStack?* >> >> >> Hi all, as we start to discuss how can we join develop from Heat and >> Senlin as we originally planned when we decided to fork Senlin from Heat >> long time ago. >> >> IMO the biggest issues we got now are we got users using autoscaling in >> both services, appears there is a lot of duplicated effort, and some great >> enhancement didn't exist in another service. 
>> As a long-term goal (from the beginning), we should try to join >> development to sync functionality, and move users to use Senlin for >> autoscaling. So we should start to review this goal, or at least we should >> try to discuss how can we help users without break or enforce anything. >> >> What will be great if we can build common library cross projects, and use >> that common library in both projects, make sure we have all improvement >> implemented in that library, finally to use Senlin from that from that >> library call in Heat autoscaling group. And in long-term, we gonna let all >> user use more general way instead of multiple ways but generate huge >> confusing for users. >> >> *As an action, I propose we have a forum in Berlin and sync up all effort >> from both teams to plan for idea scenario design. The forum submission [1] >> ended at 9/26.* >> Also would benefit from both teams to start to think about how they can >> modulize those functionalities for easier integration in the future. >> >> From some Heat PTG sessions, we keep bring out ideas on how can we >> improve current solutions for Autoscaling. We should start to talk about >> will it make sense if we combine all group resources into one, and inherit >> from it for other resources (ideally gonna deprecate rest resource types). >> Like we can do Batch create/delete in Resource Group, but not in ASG. We >> definitely got some unsynchronized works inner Heat, and cross Heat and >> Senlin. >> >> Please let me know who is interesting in this idea, so we can work >> together and reach our goal step by step. >> Also please provide though if you got any concerns about this proposal. 
>> >> [1] https://www.openstack.org/summit/berlin-2018/call-for-presentations >> -- >> May The Force of OpenStack Be With You, >> >> *Rico Lin*irc: ricolin >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Tue Sep 18 19:02:16 2018 From: james.slagle at gmail.com (James Slagle) Date: Tue, 18 Sep 2018 15:02:16 -0400 Subject: [openstack-dev] [TripleO] Edge Squad meeting Message-ID: Hi, Thanks to those who responded on the etherpad poll for a meeting time for the TripleO Edge squad: http://lists.openstack.org/pipermail/openstack-dev/2018-August/134069.html We've selected 1400UTC on Thursdays in #tripleo. I've started a rough agenda in the etherpad: https://etherpad.openstack.org/p/tripleo-edge-squad-status Feel free to add other items, and bring them up in the meeting on Thursday. One of the goals of the meeting should be to capture action items we can complete before we meet again the following week. If you have any ideas or would like to collaborate on something, please bring them up. See everyone for the meeting. Thanks! 
-- -- James Slagle -- From mriedemos at gmail.com Tue Sep 18 19:27:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 18 Sep 2018 14:27:03 -0500 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? Message-ID: The release page says Ocata is planned to go into extended maintenance mode on Aug 27 [1]. There really isn't much to this except it means we don't do releases for Ocata anymore [2]. There is a caveat that project teams that do not wish to maintain stable/ocata after this point can immediately end of life the branch for their project [3]. We can still run CI using tags, e.g. if keystone goes ocata-eol, devstack on stable/ocata can still continue to install from stable/ocata for nova and the ocata-eol tag for keystone. Having said that, if there is no undue burden on the project team keeping the lights on for stable/ocata, I would recommend not tagging the stable/ocata branch end of life at this point. So, questions that need answering are: 1. Should we cut a final release for projects with stable/ocata branches before going into extended maintenance mode? I tend to think "yes" to flush the queue of backports. In fact, [3] doesn't mention it, but the resolution said we'd tag the branch [4] to indicate it has entered the EM phase. 2. Are there any projects that would want to skip EM and go directly to EOL (yes this feels like a Monopoly question)? 
[1] https://releases.openstack.org/ [2] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases [3] https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance [4] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life -- Thanks, Matt From sean.mcginnis at gmx.com Tue Sep 18 19:29:40 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 18 Sep 2018 14:29:40 -0500 Subject: [openstack-dev] [Openstack-sigs] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: Message-ID: <20180918192940.GA10869@sm-workstation> On Tue, Sep 18, 2018 at 02:27:03PM -0500, Matt Riedemann wrote: > The release page says Ocata is planned to go into extended maintenance mode > on Aug 27 [1]. There really isn't much to this except it means we don't do > releases for Ocata anymore [2]. There is a caveat that project teams that do > not wish to maintain stable/ocata after this point can immediately end of > life the branch for their project [3]. We can still run CI using tags, e.g. > if keystone goes ocata-eol, devstack on stable/ocata can still continue to > install from stable/ocata for nova and the ocata-eol tag for keystone. > Having said that, if there is no undue burden on the project team keeping > the lights on for stable/ocata, I would recommend not tagging the > stable/ocata branch end of life at this point. > > So, questions that need answering are: > > 1. Should we cut a final release for projects with stable/ocata branches > before going into extended maintenance mode? I tend to think "yes" to flush > the queue of backports. In fact, [3] doesn't mention it, but the resolution > said we'd tag the branch [4] to indicate it has entered the EM phase. > > 2. Are there any projects that would want to skip EM and go directly to EOL > (yes this feels like a Monopoly question)? 
> > [1] https://releases.openstack.org/ > [2] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > [3] https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance > [4] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life > > -- > > Thanks, > > Matt I have a patch that's been pending for marking it as extended maintenance: https://review.openstack.org/#/c/598164/ That's just the state for Ocata. You raise some other good points here that I am curious to see input on. Sean From aschultz at redhat.com Tue Sep 18 19:30:20 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 18 Sep 2018 13:30:20 -0600 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: Message-ID: On Tue, Sep 18, 2018 at 1:27 PM, Matt Riedemann wrote: > The release page says Ocata is planned to go into extended maintenance mode > on Aug 27 [1]. There really isn't much to this except it means we don't do > releases for Ocata anymore [2]. There is a caveat that project teams that do > not wish to maintain stable/ocata after this point can immediately end of > life the branch for their project [3]. We can still run CI using tags, e.g. > if keystone goes ocata-eol, devstack on stable/ocata can still continue to > install from stable/ocata for nova and the ocata-eol tag for keystone. > Having said that, if there is no undue burden on the project team keeping > the lights on for stable/ocata, I would recommend not tagging the > stable/ocata branch end of life at this point. > > So, questions that need answering are: > > 1. Should we cut a final release for projects with stable/ocata branches > before going into extended maintenance mode? I tend to think "yes" to flush > the queue of backports. In fact, [3] doesn't mention it, but the resolution > said we'd tag the branch [4] to indicate it has entered the EM phase. > > 2. 
Are there any projects that would want to skip EM and go directly to EOL > (yes this feels like a Monopoly question)? > I believe TripleO would like to EOL instead of EM for Ocata as indicated by the thread http://lists.openstack.org/pipermail/openstack-dev/2018-September/134671.html Thanks, -Alex > [1] https://releases.openstack.org/ > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > [3] > https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance > [4] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mihalis68 at gmail.com Tue Sep 18 19:36:24 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 18 Sep 2018 15:36:24 -0400 Subject: [openstack-dev] Fwd: Denver Ops Meetup post-mortem In-Reply-To: References: Message-ID: ---------- Forwarded message --------- From: Chris Morgan Date: Tue, Sep 18, 2018 at 2:13 PM Subject: Denver Ops Meetup post-mortem To: OpenStack Operators Hello All, Last week we had a successful Ops Meetup embedded in the OpenStack Project Team Gathering in Denver. Despite generally being a useful gathering, there were definitely lessons learned and things to work on, so I thought it would be useful to share a post-mortem. I encourage everyone to share their thoughts on this as well.
What went well:
- some of the sessions were great and a lot of progress was made
- overall attendance in the ops room was good
- more developers were able to join the discussions
- facilities were generally fine
- some operators leveraged being at PTG to have useful involvement in other sessions/discussions such as Keystone, User Committee, Self-Healing SIG, not to mention the usual "hallway conversations", and similarly some project devs were able to bring pressing questions directly to operators.

What didn't go so well:
- Merging into the upgrade SIG didn't go particularly well
- fewer ops attended (in particular there were fewer from outside the US)
- Some of the proposed sessions were not well vetted
- some ops who did attend stated that the event identity was diluted and the event was less attractive
- we tried to adjust the day 2 schedule to include late submissions; however, it was probably too late in some cases

I don't think it's so important to drill down into all the whys and wherefores of how we fell down here except to say that the ops meetups team is a small bunch of volunteers, all with day jobs (presumably just like everyone else on this mailing list). The usual, basically. Much more important: what will be done to improve things going forward:
- The User Committee has offered to get involved with the technical content, in particular to bring forward topics from other relevant events into the ops meetup planning process, and then take output from ops meetups forward to subsequent events. We (ops meetup team) have welcomed this.
- The Ops Meetups Team will endeavor to start topic selection earlier and take a more critical approach. Having a longer list of possible sessions (when starting with material from earlier events) should make it at least possible to devise a better agenda. Agenda quality drives attendance to some extent and so can ensure a virtuous circle.
- We need to work out whether we're doing fixed-schedule events (similar to previous mid-cycle Ops Meetups) or fully flexible PTG-style events, but grafting one onto the other ad hoc is clearly a terrible idea. This needs more discussion.
- The Ops Meetups Team continues to explore strange new worlds, or at least get in touch with more and more OpenStack operators to find out what the meetups team and these events could do for them and hence drive the process better. One specific work item here is to help the (widely disparate) operator community with technical issues such as getting set up with the openstack git/gerrit and IRC. The latter is the preferred way for the community to meet, but is particularly difficult now with the registered nickname requirement. We will add help documentation on how to get over this hurdle.
- YOUR SUGGESTION HERE

Chris -- Chris Morgan -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Tue Sep 18 21:16:47 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 18 Sep 2018 16:16:47 -0500 Subject: [openstack-dev] [ptg][cinder] Stein PTG Summary Page Ready ... Message-ID: <78636766-8656-5110-2526-3cb5a361e06c@gmail.com> Team, I have put together the following page with a summary of all our discussions at the PTG: https://wiki.openstack.org/wiki/CinderSteinPTGSummary Please review the contents and let me know if anything needs to be changed. Jay From mriedemos at gmail.com Tue Sep 18 22:30:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 18 Sep 2018 17:30:05 -0500 Subject: [openstack-dev] Forum Topic Submission Period In-Reply-To: <5B9FD2BB.3060806@openstack.org> References: <5B9FD2BB.3060806@openstack.org> Message-ID: <5b5a669d-144c-bcc2-306c-c6410ef705ef@gmail.com> On 9/17/2018 11:13 AM, Jimmy McArthur wrote: > Hello Everyone! > > The Forum Topic Submission session started September 12 and will run > through September 26th.
Now is the time to wrangle the topics you > gathered during your Brainstorming Phase and start pushing forum topics > through. Don't rely only on a PTL to make the agenda... step on up and > place the items you consider important front and center. > > As you may have noticed on the Forum Wiki > (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP > tool this year. We did our best to remove Summit specific language, but > if you notice something, just know that you are submitting to the > Forum.  URL is here: > > https://www.openstack.org/summit/berlin-2018/call-for-presentations > > Looking forward to seeing everyone's submissions! > > If you have questions or concerns about the process, please don't > hesitate to reach out. > > Cheers, > Jimmy Just a process question. I submitted a presentation for the normal marketing blitz part of the summit which wasn't accepted (I'm still dealing with this emotionally, btw...) but when I look at the CFP link for Forum topics, my thing shows up there as "Received" so does that mean my non-Forum-at-all submission is now automatically a candidate for the Forum because that would not be my intended audience (only suits and big wigs please). -- Thanks, Matt From jungleboyj at gmail.com Tue Sep 18 22:34:16 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 18 Sep 2018 17:34:16 -0500 Subject: [openstack-dev] [cinder] Berlin Forum Proposals Message-ID: <271402d3-d722-d1c9-22cc-39f809428f05@gmail.com> Team, I have created an etherpad for our Forum Topic Planning: https://etherpad.openstack.org/p/cinder-berlin-forum-proposals Please add your ideas to the etherpad.  Thank you! 
Jay From jimmy at openstack.org Tue Sep 18 22:40:27 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 18 Sep 2018 17:40:27 -0500 Subject: [openstack-dev] Forum Topic Submission Period In-Reply-To: <5b5a669d-144c-bcc2-306c-c6410ef705ef@gmail.com> References: <5B9FD2BB.3060806@openstack.org> <5b5a669d-144c-bcc2-306c-c6410ef705ef@gmail.com> Message-ID: <5BA17EDB.5060701@openstack.org> Hey Matt, Matt Riedemann wrote: > > Just a process question. Good question. > I submitted a presentation for the normal marketing blitz part of the > summit which wasn't accepted (I'm still dealing with this emotionally, > btw...) If there's anything I can do... > but when I look at the CFP link for Forum topics, my thing shows up > there as "Received" so does that mean my non-Forum-at-all submission > is now automatically a candidate for the Forum because that would not > be my intended audience (only suits and big wigs please). Forum Submissions would be considered separate and non-Forum submissions will not be considered for the Forum. The submission process is based on the track you submit to and, in the case of the Forum, we separate this track out from the rest of the submission process. If you think there is still something funky, send me a note via speakersupport at openstack.org or jimmy at openstack.org and I'll work through it with you. Cheers, Jimmy From doug at doughellmann.com Tue Sep 18 22:58:42 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 18 Sep 2018 16:58:42 -0600 Subject: [openstack-dev] [User-committee][tc] Joint UC/TC Meeting In-Reply-To: References: Message-ID: <1537311496-sup-7778@lrrr.local> [Redirecting this from the openstack-tc list to the -dev list.] Excerpts from Melvin Hillsman's message of 2018-09-18 17:43:57 -0500: > Hey everyone, > > UC is proposing a joint UC/TC meeting at the end of the month say starting > after Berlin to work more closely together. 
The last Monday of the month at > 1pm US Central time is current proposal, throwing it out here now for > feedback/discussion, so that would make the first one Monday, November > 26th, 2018. > From duc.openstack at gmail.com Tue Sep 18 23:29:04 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Tue, 18 Sep 2018 16:29:04 -0700 Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration In-Reply-To: References: Message-ID: Thanks for creating the etherpad. I have added a question on the common library in the etherpad. I think we can iterate on the basic proposal before the forum in Berlin so that we can get input from developers who won't be able to attend in person. On Tue, Sep 18, 2018 at 11:46 AM Rico Lin wrote: > cool Duc, and it's nicely started: > https://etherpad.openstack.org/p/autoscaling-integration-and-feedback > I also submit the etherpad, will add you as moderator once it's selected > (don't know why, but can't add any more now from the web). > > Please add whatever you like to that etherpad, I will try to input more > information ASAP. > all information will continue to be used with or without that forum. > > On Wed, Sep 19, 2018 at 12:51 AM Duc Truong > wrote: > >> Hi Rico, >> >> I'm the Senlin PTL and would be happy to have a forum discussion in >> Berlin about the future of autoscaling. >> >> Can you go ahead and start an etherpad to capture the proposed agenda >> and discussion items? Also, feel free to submit the forum submission >> so that we can get it on the schedule. >> >> Thanks, >> >> Duc (dtruong) >> >> On Mon, Sep 17, 2018 at 8:28 PM Rico Lin >> wrote: >> >>> *TL;DR* >>> *How about a forum in Berlin for discussing autoscaling integration (as >>> a long-term goal) in OpenStack?* >>> >>> >>> Hi all, as we start to discuss how can we join develop from Heat and >>> Senlin as we originally planned when we decided to fork Senlin from Heat >>> long time ago. 
>>> >>> IMO the biggest issues we got now are we got users using autoscaling in >>> both services, appears there is a lot of duplicated effort, and some great >>> enhancement didn't exist in another service. >>> As a long-term goal (from the beginning), we should try to join >>> development to sync functionality, and move users to use Senlin for >>> autoscaling. So we should start to review this goal, or at least we should >>> try to discuss how can we help users without break or enforce anything. >>> >>> What will be great if we can build common library cross projects, and >>> use that common library in both projects, make sure we have all improvement >>> implemented in that library, finally to use Senlin from that from that >>> library call in Heat autoscaling group. And in long-term, we gonna let all >>> user use more general way instead of multiple ways but generate huge >>> confusing for users. >>> >>> *As an action, I propose we have a forum in Berlin and sync up all >>> effort from both teams to plan for idea scenario design. The forum >>> submission [1] ended at 9/26.* >>> Also would benefit from both teams to start to think about how they can >>> modulize those functionalities for easier integration in the future. >>> >>> From some Heat PTG sessions, we keep bring out ideas on how can we >>> improve current solutions for Autoscaling. We should start to talk about >>> will it make sense if we combine all group resources into one, and inherit >>> from it for other resources (ideally gonna deprecate rest resource types). >>> Like we can do Batch create/delete in Resource Group, but not in ASG. We >>> definitely got some unsynchronized works inner Heat, and cross Heat and >>> Senlin. >>> >>> Please let me know who is interesting in this idea, so we can work >>> together and reach our goal step by step. >>> Also please provide though if you got any concerns about this proposal. 
>>> >>> [1] https://www.openstack.org/summit/berlin-2018/call-for-presentations >>> -- >>> May The Force of OpenStack Be With You, >>> >>> *Rico Lin*irc: ricolin >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Sep 18 23:52:41 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 18 Sep 2018 16:52:41 -0700 Subject: [openstack-dev] [All][TC] Stein TC Polling is open! Message-ID: Hello! The poll for the TC Election is now open and will remain open until Sep 27, 2018 23:45 UTC. We are selecting 6 TC members, please rank all candidates in your order of preference. For more information on condorcet voting and how it works, you can read more here[0]. 
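The Condorcet method mentioned above can be made concrete with a toy example. This is a minimal sketch only, not the algorithm the actual poll uses (CIVS implements a complete Condorcet method with tie-breaking rules): a Condorcet winner is a candidate who beats every other candidate in head-to-head comparisons across all ranked ballots.

```python
# Minimal Condorcet sketch (illustrative only -- the real CIVS service
# adds completion/tie-breaking rules; this toy version returns None
# when no candidate beats all others, the classic Condorcet paradox).

def condorcet_winner(ballots):
    """Return the candidate who wins every head-to-head matchup, or None.

    Each ballot is a full ranking: a list of all candidates, most
    preferred first.
    """
    candidates = set(c for ballot in ballots for c in ballot)

    def beats(a, b):
        # a beats b if a strict majority of ballots rank a above b.
        a_above = sum(1 for ballot in ballots
                      if ballot.index(a) < ballot.index(b))
        return a_above > len(ballots) - a_above

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None

ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]
print(condorcet_winner(ballots))  # A beats B 2-1 and C 3-0 -> prints A
```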
You are eligible to vote if you are a Foundation individual member[1] that also has committed to one of the official programs projects[2] over the Aug 11, 2017 00:00 UTC - Aug 30, 2018 00:00 UTC timeframe (Queens to Rocky) or if you are one of the extra-atcs.[3] What to do if you don't see the email and have a commit in at least one of the official programs projects[2] and are a Foundation individual member:
* check the trash or spam folder of your gerrit Preferred Email address[4], in case it went into trash or spam
* wait a bit and check again, in case your email server (or CIVS) is a bit slow
* find the sha of at least one commit from the program project repos[2] and the link to your foundation member profile and email them to the election officials[5]. If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot.

Our democratic process is important to the health of OpenStack; please exercise your right to vote! Candidate statements/platforms can be found linked to candidate names[6]. Happy voting! Thank you, -The Election Officials
[0] https://en.wikipedia.org/wiki/Condorcet_method
[1] http://www.openstack.org/community/members/
[2] https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2018-elections
[3] Look for the extra-atcs element in [2]
[4] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your preferred email. That is where the ballot has been sent.
[5] http://governance.openstack.org/election/#election-officials
[6] http://governance.openstack.org/election/#stein-tc-candidates
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From emccormick at cirrusseven.com Wed Sep 19 00:54:53 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 18 Sep 2018 20:54:53 -0400 Subject: [openstack-dev] Ops Forum Session Brainstorming In-Reply-To: References: Message-ID: This is a friendly reminder for anyone wishing to see Ops-focused sessions in Berlin to get your submissions in soon. We have a couple of things there that came out of the PTG, but that's it so far. See below for details. Cheers, Erik On Wed, Sep 12, 2018, 5:07 PM Erik McCormick wrote: > Hello everyone, > > I have set up an etherpad to collect Ops related session ideas for the > Forum at the Berlin Summit. Please suggest any topics that you would > like to see covered, and +1 existing topics you like. > > https://etherpad.openstack.org/p/ops-forum-stein > > Cheers, > Erik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Wed Sep 19 01:28:16 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 18 Sep 2018 18:28:16 -0700 Subject: [openstack-dev] [octavia] Optimize the query of the octavia database In-Reply-To: References: <423483AB-0159-4C01-9CC5-A61AB24A4341@blizzard.com> Message-ID: Hi All, I have created a patch that resolves this regression: https://review.openstack.org/#/c/603242/ Please take a look. Locally it showed dramatic improvements. Listing load balancers went from two and a half minutes down to seconds when I had a thousand Act/Stdby LBs. The patch may need some touch ups around testing, but the functionality should be good. We also have some team members working on Rally support for Octavia, so hopefully we will be able to catch a regression like this immediately in the future. Please support those efforts if you can contribute some time. Michael On Fri, Sep 14, 2018 at 6:01 PM Jeff Yang wrote: > > Ok, Thank you very much for your work.
> > Adam Harwell 于2018年9月15日周六 上午8:26写道: >> >> It's high priority for me as well, so we should be able to get something done very soon, I think. Look for something early next week maybe? >> >> Thanks, >> --Adam >> >> On Thu, Sep 13, 2018, 21:18 Jeff Yang wrote: >>> >>> Thanks: >>> I found the correlative patch in neutron-lbaas: https://review.openstack.org/#/c/568361/ >>> >>> The bug was marked high level by our QA team. I need to fix it as soon as possible. >>> Does Michael Johnson have any good suggestion? I am willing to complete the >>> repair work of this bug. If your patch still takes a while to prepare. >>> >>> Michael Johnson 于2018年9月14日周五 上午7:56写道: >>>> >>>> This is a known regression in the Octavia API performance. It has an >>>> existing story[0] that is under development. You are correct, that >>>> star join is the root of the problem. >>>> Look for a patch soon. >>>> >>>> [0] https://storyboard.openstack.org/#!/story/2002933 >>>> >>>> Michael >>>> On Thu, Sep 13, 2018 at 10:32 AM Erik Olof Gunnar Andersson >>>> wrote: >>>> > >>>> > This was solved in neutron-lbaas recently, maybe we could adopt the same method for Octavia? >>>> > >>>> > Sent from my iPhone >>>> > >>>> > On Sep 13, 2018, at 4:54 AM, Jeff Yang wrote: >>>> > >>>> > Hi, All >>>> > >>>> > As octavia resources increase, I found that running the "openstack loadbalancer list" command takes longer and longer. Sometimes a 504 error is reported. >>>> > >>>> > By reading the code, I found that octavia will performs complex left outer join queries when acquiring resources such as loadbalancer, listener, pool, etc. in order to only make one trip to the database. >>>> > Reference code: http://paste.openstack.org/show/730022 Line 133 >>>> > Generated SQL statements: http://paste.openstack.org/show/730021 >>>> > >>>> > So, I suggest that adjust the query strategy to provide different join queries for different resources. 
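To illustrate the per-resource query strategy suggested above in isolation, here is a minimal SQLAlchemy sketch. The models and column names are invented for this example, not Octavia's actual schema or repository code; it only shows the general technique: load each related collection with its own query (selectinload/subqueryload) instead of one star LEFT OUTER JOIN (joinedload on every relationship), which multiplies result rows as collections grow.

```python
# Toy sketch (invented models, NOT Octavia's real code) of loading
# child collections with separate queries instead of one star join.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (declarative_base, relationship, selectinload,
                            sessionmaker)

Base = declarative_base()

class LoadBalancer(Base):
    __tablename__ = "load_balancer"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    listeners = relationship("Listener")
    pools = relationship("Pool")

class Listener(Base):
    __tablename__ = "listener"
    id = Column(Integer, primary_key=True)
    load_balancer_id = Column(Integer, ForeignKey("load_balancer.id"))

class Pool(Base):
    __tablename__ = "pool"
    id = Column(Integer, primary_key=True)
    load_balancer_id = Column(Integer, ForeignKey("load_balancer.id"))

engine = create_engine("sqlite://")  # in-memory database for the demo
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(LoadBalancer(id=1, name="lb1"))
session.add_all([Listener(id=1, load_balancer_id=1),
                 Listener(id=2, load_balancer_id=1),
                 Pool(id=1, load_balancer_id=1)])
session.commit()

# One extra SELECT per relationship, but the main result set stays at
# one row per load balancer -- no N * listeners * pools row explosion.
lbs = (session.query(LoadBalancer)
       .options(selectinload(LoadBalancer.listeners),
                selectinload(LoadBalancer.pools))
       .all())
print(len(lbs), len(lbs[0].listeners), len(lbs[0].pools))  # 1 2 1
```

Swapping `selectinload` for `joinedload` on both relationships would instead produce a single joined query whose row count is the product of the collection sizes, which is the behavior the thread identifies as the regression.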
>>>> > >>>> > https://storyboard.openstack.org/#!/story/2003751 >>>> > >>>> > __________________________________________________________________________ >>>> > OpenStack Development Mailing List (not for usage questions) >>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> > >>>> > __________________________________________________________________________ >>>> > OpenStack Development Mailing List (not for usage questions) >>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Wed Sep 19 02:52:50 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 18 
Sep 2018 21:52:50 -0500 Subject: [openstack-dev] [python3] tempest and grenade conversion to python 3.6 In-Reply-To: <1537291156-sup-8745@lrrr.local> References: <20180918164654.zld7rzxessmpogns@bishop> <1537289625.1170266.1512393384.53855DF8@webmail.messagingengine.com> <1537291156-sup-8745@lrrr.local> Message-ID: On 9/18/2018 12:28 PM, Doug Hellmann wrote: > What's probably missing is a version of the grenade job that allows us > to control that USE_PYTHON3 variable before and after the upgrade. > > I see a few different grenade jobs (neutron-grenade, > neutron-grenade-multinode, > legacy-grenade-dsvm-neutron-multinode-live-migration, possibly others). > Which ones are "current" and would make a good candidate as a base for a > new job? Grenade just runs devstack on the old side (e.g. stable/rocky) using the devstack stackrc file (which could have USE_PYTHON3 in it), runs tempest 'smoke' tests to create some resources, saves off some information about those resources in a "database" (just an ini file), then runs devstack on the new side (e.g. master) using the new side stackrc file and verifies those saved off resources made it through the upgrade. It's all bash so there isn't anything python-specific about grenade. I saw, but didn't comment on, the other thread about whether it would be possible to create a grenade-2to3 job. I'd think that is pretty doable based on the USE_PYTHON3 variable. We'd just have that False on the old side, and True on the new side, and devstack will do its thing.
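The False-on-old/True-on-new idea could be resolved per side roughly as follows. This is a hypothetical sketch: BASE_USE_PYTHON3 and TARGET_USE_PYTHON3 are invented names for illustration, not existing devstack-gate variables, and the real logic would live in bash rather than Python.

```python
# Hypothetical per-side resolution of USE_PYTHON3 for a grenade-2to3
# job (variable names invented for illustration; not devstack-gate code).

def use_python3(side, env):
    """Resolve USE_PYTHON3 for the 'base' (old) or 'target' (new) side.

    A per-side override wins; otherwise fall back to the global flag.
    """
    per_side = env.get("%s_USE_PYTHON3" % side.upper())
    if per_side is not None:
        return per_side == "True"
    return env.get("USE_PYTHON3", "False") == "True"

# What a 2-to-3 grenade job definition might export:
job_env = {"BASE_USE_PYTHON3": "False", "TARGET_USE_PYTHON3": "True"}
print(use_python3("base", job_env), use_python3("target", job_env))
# -> False True
```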
[1] https://github.com/openstack-infra/devstack-gate/blob/95fa4343104eafa655375cce3546d27139211d13/devstack-vm-gate-wrap.sh#L434 -- Thanks, Matt From mriedemos at gmail.com Wed Sep 19 02:57:23 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 18 Sep 2018 21:57:23 -0500 Subject: [openstack-dev] [python3] tempest and grenade conversion to python 3.6 In-Reply-To: References: <20180918164654.zld7rzxessmpogns@bishop> <1537289625.1170266.1512393384.53855DF8@webmail.messagingengine.com> <1537291156-sup-8745@lrrr.local> Message-ID: On 9/18/2018 9:52 PM, Matt Riedemann wrote: > On 9/18/2018 12:28 PM, Doug Hellmann wrote: >> What's probably missing is a version of the grenade job that allows us >> to control that USE_PYTHON3 variable before and after the upgrade. >> >> I see a few different grenade jobs (neutron-grenade, >> neutron-grenade-multinode, >> legacy-grenade-dsvm-neutron-multinode-live-migration, possibly others). >> Which ones are "current" and would make a good candidate as a base for a >> new job? > > Grenade just runs devstack on the old side (e.g. stable/rocky) using the > devstack stackrc file (which could have USE_PYTHON3 in it), runs tempest > 'smoke' tests to create some resources, saves off some information about > those resources in a "database" (just an ini file), then runs devstack > on the new side (e.g. master) using the new side stackrc file and > verifies those saved off resources made it through the upgrade. It's all > bash so there isn't anything python-specific about grenade. > > I saw, but didn't comment on, the other thread about if it would be > possible to create a grenade-2to3 job. I'd think that is pretty doable > based on the USE_PYTHON3 variable. We'd just have that False on the old > side, and True on the new side, and devstack will do it's thing. 
Right > now the USE_PYTHON3 variable is global in devstack-gate [1] (which is > the thing that orchestrates the grenade run for the legacy jobs), but > I'm sure we could hack that to be specific to the base (old) and target > (new) release for the grenade run. > > [1] > https://github.com/openstack-infra/devstack-gate/blob/95fa4343104eafa655375cce3546d27139211d13/devstack-vm-gate-wrap.sh#L434 > > To answer Doug's original question, neutron-grenade-multinode is probably best to model for a new job if you want to test rolling upgrades, because that job has two compute nodes and leaves one on the 'old' side so it would upgrade the controller services and one compute to Stein and leave the other compute at Rocky. So if you start with python2 on the old side and upgrade to python3 for everything except one compute, you'll have a pretty good idea of whether or not that rolling upgrade works through our various services and libraries, like the oslo.messaging stuff noted in the other thread. -- Thanks, Matt From mtreinish at kortar.org Wed Sep 19 04:05:38 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 19 Sep 2018 00:05:38 -0400 Subject: [openstack-dev] [python3] tempest and grenade conversion to python 3.6 In-Reply-To: References: <20180918164654.zld7rzxessmpogns@bishop> <1537289625.1170266.1512393384.53855DF8@webmail.messagingengine.com> <1537291156-sup-8745@lrrr.local> Message-ID: <20180919040538.GA8418@zeong> On Tue, Sep 18, 2018 at 09:52:50PM -0500, Matt Riedemann wrote: > On 9/18/2018 12:28 PM, Doug Hellmann wrote: > > What's probably missing is a version of the grenade job that allows us > > to control that USE_PYTHON3 variable before and after the upgrade. > > > > I see a few different grenade jobs (neutron-grenade, > > neutron-grenade-multinode, > > legacy-grenade-dsvm-neutron-multinode-live-migration, possibly others). > > Which ones are "current" and would make a good candidate as a base for a > > new job? 
> > Grenade just runs devstack on the old side (e.g. stable/rocky) using the > devstack stackrc file (which could have USE_PYTHON3 in it), runs tempest > 'smoke' tests to create some resources, saves off some information about > those resources in a "database" (just an ini file), then runs devstack on > the new side (e.g. master) using the new side stackrc file and verifies > those saved off resources made it through the upgrade. It's all bash so > there isn't anything python-specific about grenade. This isn't quite right: we run devstack on the old side, but on the new side we don't actually run devstack. Grenade updates the code, runs DB migrations (and any other mandatory upgrade steps), and then just relaunches the service. That's kind of the point: to make sure new code works with old config. The target (i.e. new side) stackrc and localrc/local.conf are there for the common functions shared between devstack and grenade, which are used to do things like pull the code and start services to make sure they run against the proper branches, since there isn't any point in reimplementing the same exact thing. But we don't do a full devstack run; that's why you only see stack.sh run once in the logs on a grenade job. > > I saw, but didn't comment on, the other thread about whether it would be possible > to create a grenade-2to3 job. I'd think that is pretty doable based on the > USE_PYTHON3 variable. We'd just have that False on the old side, and True on > the new side, and devstack will do its thing. Right now the USE_PYTHON3 > variable is global in devstack-gate [1] (which is the thing that > orchestrates the grenade run for the legacy jobs), but I'm sure we could > hack that to be specific to the base (old) and target (new) release for the > grenade run. I don't think this will work because we won't be running any initial python 3 setup on the system.
I think it will just update paths and try to use python3 pip and python3 paths for things, but it will be missing the things it needs for those to work. It's probably worth a try either way (a quick experiment to say definitively) but my gut is telling me that it's not going to be that simple. -Matt Treinish > > [1] https://github.com/openstack-infra/devstack-gate/blob/95fa4343104eafa655375cce3546d27139211d13/devstack-vm-gate-wrap.sh#L434 > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed Sep 19 04:29:09 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 19 Sep 2018 13:29:09 +0900 Subject: [openstack-dev] [python3] tempest and grenade conversion to python 3.6 In-Reply-To: <1537291156-sup-8745@lrrr.local> References: <20180918164654.zld7rzxessmpogns@bishop> <1537289625.1170266.1512393384.53855DF8@webmail.messagingengine.com> <1537291156-sup-8745@lrrr.local> Message-ID: <165f016c8d0.1081ec30d206070.3272925134524044960@ghanshyammann.com> ---- On Wed, 19 Sep 2018 02:28:29 +0900 Doug Hellmann wrote ---- > Excerpts from Clark Boylan's message of 2018-09-18 09:53:45 -0700: > > On Tue, Sep 18, 2018, at 9:46 AM, Nate Johnston wrote: > > > Hello python 3.6 champions, > > > > > > I have looked around a little, and I don't see a method for me to > > > specifically select the version of python that the tempest and grenade > > > jobs for my project (neutron) are using. I assume one of four things > > > is at play here: > > > > > > A. These projects already shifted to python 3 and I don't have to worry > > > about it > > > > > > B. There is a toggle for the python version I just have not seen yet > > > > > > C. These projects are still on python 2 and need help to do a conversion > > > to python 3, which would affect all customers > > > > > > D. 
Something else that I have failed to imagine > > > > > > Could you elaborate which of these options properly reflects the state > > > of affairs? If the answer is "C" then perhaps we can start a discussion > > > on that migration. > > > > For our devstack and grenade jobs tempest is installed using tox [0]. And since the full testenv in tempest's tox.ini doesn't specify a python version [1] I expect that it will attempt a python2 virtualenv on every platform (Arch linux may be an exception but we don't test that). > > > > I think that means C is the situation here. To change that you can set basepython to python3 (see [2] for an example) which will run tempest under whichever python3 is present on the system. The one gotcha for this is that it will break tempest on centos which does not have python3. Maybe the thing to do there is add a full-python2 testenv that centos can run? > > > > [0] https://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/tempest#n653 > > [1] https://git.openstack.org/cgit/openstack/tempest/tree/tox.ini#n74 > > [2] https://git.openstack.org/cgit/openstack-infra/zuul/tree/tox.ini#n7 > > > > Hope this helps, > > Clark > > > > While having tempest run under python 3 would be great, I'm not sure > that's necessary in order to test a service. > > Don't those jobs use devstack to install the system being tested? And > devstack uses some environment variables to control the version of > python. For example the tempest-full-py3 job [1] defines USE_PYTHON3 as > 'true'. > > What's probably missing is a version of the grenade job that allows us > to control that USE_PYTHON3 variable before and after the upgrade. > > I see a few different grenade jobs (neutron-grenade, > neutron-grenade-multinode, > legacy-grenade-dsvm-neutron-multinode-live-migration, possibly others). > Which ones are "current" and would make a good candidate as a base for a > new job? All of these are legacy jobs, only renamed, so I would not recommend using them as a base.
Currently those live in the neutron repo instead of grenade. At the PTG we discussed finishing the grenade base zuul v3 job work so that other projects can use it as a base. Work is in progress [1] and is a priority [2] for us to finish as early as possible. [1] https://review.openstack.org/#/q/topic:grenade_zuulv3+(status:open+OR+status:merged) [2] https://etherpad.openstack.org/p/qa-stein-priority -gmann > > Doug > > [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n70 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zhipengh512 at gmail.com Wed Sep 19 04:35:52 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 19 Sep 2018 12:35:52 +0800 Subject: [openstack-dev] [User-committee] [publiccloud-wg] Meeting tomorrow In-Reply-To: <70976fdd-3d0f-dafa-a792-4cb4daf96af1@citynetwork.eu> References: <70976fdd-3d0f-dafa-a792-4cb4daf96af1@citynetwork.eu> Message-ID: cc'ed sig list. Kind reminder for the meeting, about 2 and a half hours away. We will do a review of the Denver PTG summary [0] and then go over the forum sessions which we want to propose [1]. This is an EU/APAC friendly meeting so please do join us if you are in the region :) [0]https://etherpad.openstack.org/p/publiccloud-wg-stein-ptg-summary [1]https://etherpad.openstack.org/p/BER-forum-public-cloud On Tue, Sep 18, 2018 at 8:05 PM Tobias Rydberg < tobias.rydberg at citynetwork.eu> wrote: > Hi everyone, > > Don't forget that we have a meeting tomorrow at 0700 UTC at IRC channel > #openstack-publiccloud. > > See you all there!
> > Cheers, > Tobias > > -- > Tobias Rydberg > Senior Developer > Twitter & IRC: tobberydberg > > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain.bauza at gmail.com Wed Sep 19 07:56:15 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Wed, 19 Sep 2018 09:56:15 +0200 Subject: [openstack-dev] Forum Topic Submission Period In-Reply-To: <5BA17EDB.5060701@openstack.org> References: <5B9FD2BB.3060806@openstack.org> <5b5a669d-144c-bcc2-306c-c6410ef705ef@gmail.com> <5BA17EDB.5060701@openstack.org> Message-ID: On Wed, Sep 19, 2018 at 00:41, Jimmy McArthur wrote: > Hey Matt, > > > Matt Riedemann wrote: > > > > Just a process question. > > Good question. > > I submitted a presentation for the normal marketing blitz part of the > > summit which wasn't accepted (I'm still dealing with this emotionally, > > btw...) Same for me :-) Unrelated point: for the first time in all the Summits I can remember, I wasn't able to find out who the track chairs for a specific track were. Ideally, I'd love to reach them in order to learn what they disliked in my proposal. > If there's anything I can do...
> > but when I look at the CFP link for Forum topics, my thing shows up > > there as "Received" so does that mean my non-Forum-at-all submission > > is now automatically a candidate for the Forum because that would not > > be my intended audience (only suits and big wigs please). > Forum Submissions would be considered separate and non-Forum submissions > will not be considered for the Forum. The submission process is based on > the track you submit to and, in the case of the Forum, we separate this > track out from the rest of the submission process. > > If you think there is still something funky, send me a note via > speakersupport at openstack.org or jimmy at openstack.org and I'll work > through it with you. > > I have another question: do you know why we can't propose a Forum session with multiple speakers? Is this a bug or an expected behaviour? In general, there is only one moderator for a Forum session, but in the past, I clearly remember we had some sessions that had multiple moderators (for various reasons). -Sylvain Cheers, > Jimmy > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From geguileo at redhat.com Wed Sep 19 09:42:26 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 19 Sep 2018 11:42:26 +0200 Subject: [openstack-dev] [cinder] Berlin Forum Proposals In-Reply-To: <271402d3-d722-d1c9-22cc-39f809428f05@gmail.com> References: <271402d3-d722-d1c9-22cc-39f809428f05@gmail.com> Message-ID: <20180919094226.ksvxaucta5bifwpx@localhost> On 18/09, Jay S Bryant wrote: > Team, > > I have created an etherpad for our Forum Topic Planning: > https://etherpad.openstack.org/p/cinder-berlin-forum-proposals > > Please add your ideas to the etherpad.  Thank you! > > Jay > Hi Jay, After our last IRC meeting, a couple of weeks ago, I created an etherpad [1] and added it to the Forum wiki [2] (though I failed to mention it). I had added a possible topic to this etherpad [1], but I can move it to yours and update the wiki if you like. Cheers, Gorka. [1]: https://etherpad.openstack.org/p/cinder-forum-stein [2]: https://wiki.openstack.org/wiki/Forum/Berlin2018 From geguileo at redhat.com Wed Sep 19 09:52:49 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 19 Sep 2018 11:52:49 +0200 Subject: [openstack-dev] [ptg][cinder] Stein PTG Summary Page Ready ... In-Reply-To: <78636766-8656-5110-2526-3cb5a361e06c@gmail.com> References: <78636766-8656-5110-2526-3cb5a361e06c@gmail.com> Message-ID: <20180919095249.mfutrq74ynkdwgvh@localhost> On 18/09, Jay S Bryant wrote: > Team, > > I have put together the following page with a summary of all our discussions > at the PTG: https://wiki.openstack.org/wiki/CinderSteinPTGSummary > > Please review the contents and let me know if anything needs to be changed. > > Jay > > Hi Jay, Thank you for the great summary, it looks great. After reading it, I can't think of anything that's missing. Cheers, Gorka. 
From swamireddy at gmail.com Wed Sep 19 10:25:05 2018 From: swamireddy at gmail.com (M Ranga Swami Reddy) Date: Wed, 19 Sep 2018 15:55:05 +0530 Subject: [openstack-dev] GUI for Swift object storage In-Reply-To: References: Message-ID: Hi Clay - Thanks for sharing the details. On Tue, Sep 18, 2018 at 7:09 PM Clay Gerrard wrote: > > I don't know about a good open source cross-platform GUI client, but the SwiftStack one is slick and doesn't seem to be behind a paywall (yet?) > > https://www.swiftstack.com/downloads > > There's probably some proprietary integration that won't make sense - but it should work with any Swift end-point. Let me know how it goes! > > -Clay > > N.B. IANAL, so you should probably double check the license/terms if you're planning on doing anything more sophisticated than personal use. > > On Mon, Sep 17, 2018 at 9:31 PM M Ranga Swami Reddy wrote: >> >> All GUI tools are non open source...need to pay like cyberduck etc. >> Looking for open source GUI for Swift API access. >> >> On Tue, 18 Sep 2018, 06:41 John Dickinson, wrote: >>> >>> That's a great question. >>> >>> A quick google search shows a few like Swift Explorer, Cyberduck, and Gladinet. But since Swift supports the S3 API (check with your cluster operator to see if this is enabled, or examine the results of a GET /info request), you can use any available S3 GUI client as well (as long as you can configure the endpoints you connect to). >>> >>> --John >>> >>> On 17 Sep 2018, at 16:48, M Ranga Swami Reddy wrote: >>> >>> Hi - is there any GUI (open source) available for Swift objects storage? 
>>> >>> Thanks >>> Swa >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From swamireddy at gmail.com Wed Sep 19 10:25:23 2018 From: swamireddy at gmail.com (M Ranga Swami Reddy) Date: Wed, 19 Sep 2018 15:55:23 +0530 Subject: [openstack-dev] GUI for Swift object storage In-Reply-To: References: Message-ID: Hi Tim - Thanks for sharing the details. Thanks Swami On Tue, Sep 18, 2018 at 10:05 PM Tim Burke wrote: > > Hate to nitpick, but Cyberduck is licensed GPLv3 -- you can browse the source (and confirm the license) at https://trac.cyberduck.io/browser/trunk and https://trac.cyberduck.io/ indicates the source is available via git or svn. They do nag you to donate, though. > > Swift explorer is Apache 2, available at https://github.com/roikku/swift-explorer. I don't know anything about Gladinet.
> > Tim > > On Sep 17, 2018, at 7:31 PM, M Ranga Swami Reddy wrote: > All GUI tools are non open source...need to pay like cyberduck etc. > Looking for open source GUI for Swift API access. > On Tue, 18 Sep 2018, 06:41 John Dickinson, wrote: >> >> That's a great question. >> >> A quick google search shows a few like Swift Explorer, Cyberduck, and Gladinet. But since Swift supports the S3 API (check with your cluster operator to see if this is enabled, or examine the results of a GET /info request), you can use any available S3 GUI client as well (as long as you can configure the endpoints you connect to). >> >> --John >> >> On 17 Sep 2018, at 16:48, M Ranga Swami Reddy wrote: >> >> Hi - is there any GUI (open source) available for Swift objects storage? >> >> Thanks >> Swa >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dtantsur at redhat.com Wed Sep 19
11:16:31 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 19 Sep 2018 13:16:31 +0200 Subject: [openstack-dev] [Openstack-sigs] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: Message-ID: <5d24ac83-5636-b639-abac-f9523a111409@redhat.com> On 9/18/18 9:27 PM, Matt Riedemann wrote: > The release page says Ocata is planned to go into extended maintenance mode on > Aug 27 [1]. There really isn't much to this except it means we don't do releases > for Ocata anymore [2]. There is a caveat that project teams that do not wish to > maintain stable/ocata after this point can immediately end of life the branch > for their project [3]. We can still run CI using tags, e.g. if keystone goes > ocata-eol, devstack on stable/ocata can still continue to install from > stable/ocata for nova and the ocata-eol tag for keystone. Having said that, if > there is no undue burden on the project team keeping the lights on for > stable/ocata, I would recommend not tagging the stable/ocata branch end of life > at this point. > > So, questions that need answering are: > > 1. Should we cut a final release for projects with stable/ocata branches before > going into extended maintenance mode? I tend to think "yes" to flush the queue > of backports. In fact, [3] doesn't mention it, but the resolution said we'd tag > the branch [4] to indicate it has entered the EM phase. Some ironic projects have outstanding changes, I guess we should release them. > > 2. Are there any projects that would want to skip EM and go directly to EOL (yes > this feels like a Monopoly question)? 
> > [1] https://releases.openstack.org/ > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > > [3] > https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance > > [4] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life > > From cdent+os at anticdent.org Wed Sep 19 11:31:26 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 19 Sep 2018 12:31:26 +0100 (BST) Subject: [openstack-dev] [User-committee] [tc] Joint UC/TC Meeting In-Reply-To: <1537311496-sup-7778@lrrr.local> References: <1537311496-sup-7778@lrrr.local> Message-ID: On Tue, 18 Sep 2018, Doug Hellmann wrote: > [Redirecting this from the openstack-tc list to the -dev list.] > Excerpts from Melvin Hillsman's message of 2018-09-18 17:43:57 -0500: >> UC is proposing a joint UC/TC meeting at the end of the month say starting >> after Berlin to work more closely together. The last Monday of the month at >> 1pm US Central time is current proposal, throwing it out here now for >> feedback/discussion, so that would make the first one Monday, November >> 26th, 2018. I agree that the UC and TC should work more closely together. If the best way to do that is to have a meeting then great, let's do it. We're you thinking IRC or something else? But we probably need to resolve our ambivalence towards meetings. On Sunday at the PTG we discussed maybe going back to having a TC meeting but didn't realy decide (at least as far as I recall) and didn't discuss in too much depth the reasons why we killed meetings in the first place. How would this meeting be different? 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From dangtrinhnt at gmail.com Wed Sep 19 12:52:49 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 19 Sep 2018 21:52:49 +0900 Subject: [openstack-dev] [Searchlight] vPTG tomorrow Message-ID: Hi team, As we agreed on the team's channel, tomorrow we will have our vPTG for Stein. Please see below for details: - Time: 12:00~14:00 UTC, 20th Sep. - Meeting channel: https://hangouts.google.com/group/7PeiryADgQvyweoF3 - Etherpad: https://etherpad.openstack.org/p/searchlight-stein-ptg Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Wed Sep 19 13:13:24 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 19 Sep 2018 08:13:24 -0500 Subject: [openstack-dev] Forum Topic Submission Period In-Reply-To: References: <5B9FD2BB.3060806@openstack.org> <5b5a669d-144c-bcc2-306c-c6410ef705ef@gmail.com> <5BA17EDB.5060701@openstack.org> Message-ID: <5BA24B74.8010301@openstack.org> Sylvain Bauza wrote: > > > On Wed, Sep 19, 2018 at 00:41, Jimmy McArthur > wrote: SNIP > > > Same for me :-) Unrelated point: for the first time in all the > Summits I can remember, I wasn't able to find out who the track chairs > for a specific track were. Ideally, I'd love to reach them in order to learn what they > disliked in my proposal. They were listed on an Etherpad that was listed under Presentation Selection Process in the CFP navigation. That has since been overwritten w/ Forum Selection Process, so let me try to dig that up. We publish the Track Chairs every year. > SNIP > > I have another question: do you know why we can't propose a Forum > session with multiple speakers? Is this a bug or an expected > behaviour?
In general, there is only one moderator for a Forum > session, but in the past, I clearly remember we had some sessions that > were having multiple moderators (for various reasons). Correct. Forum sessions aren't meant to have speakers like a normal presentation. They are all set up parliamentary style w/ one or more moderators. However, the moderator can manage the room any way they'd like. If you want to promote the people that will be in the room, this can be added to the abstract. > > -Sylvain > > > Cheers, > Jimmy > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Wed Sep 19 13:25:23 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 19 Sep 2018 14:25:23 +0100 (BST) Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper" Message-ID: I have a patch in progress to add some simple integration tests to placement: https://review.openstack.org/#/c/601614/ They use https://github.com/cdent/gabbi-tempest . The idea is that the method for adding more tests is to simply add more yaml in gate/gabbits, without needing to worry about adding to or think about tempest. 
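For anyone who hasn't seen gabbi before, a minimal sketch of the sort of yaml that goes in gate/gabbits. The endpoint, header values, and expected counts here are illustrative only, not copied from the actual patch:

```yaml
# Illustrative gabbits file; values are examples, not from the patch.
defaults:
    request_headers:
        x-auth-token: $ENVIRON['TOKEN']
        openstack-api-version: placement latest

tests:
    - name: placement api is alive
      GET: /
      status: 200

    - name: at least one resource provider exists
      GET: /resource_providers
      status: 200
      response_json_paths:
          $.resource_providers.`len`: 1
```

Each yaml file is an ordered sequence of HTTP requests and assertions, which is what makes "just add more yaml" a workable way to grow the test suite.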
What I have at that patch works; there are two yaml files, one of which goes through the process of confirming the existence of a resource provider and inventory, booting a server, seeing a change in allocations, resizing the server, seeing a change in allocations. But this is kludgy in a variety of ways and I'm hoping to get some help or pointers to the right way. I'm posting here instead of asking in IRC as I assume other people confront these same confusions. The issues: * The associated playbooks are cargo-culted from stuff labelled "legacy" that I was able to find in nova's jobs. I get the impression that these are more verbose and duplicative than they need to be and are not aligned with modern zuul v3 coolness. * It takes an age for the underlying devstack to build; I can presumably save some time by installing fewer services, and making it obvious how to add more when more are required. What's the canonical way to do this? Mess with {enable,disable}_service, cook the ENABLED_SERVICES var, do something with required_projects? * This patch, and the one that follows it [1] dynamically install stuff from pypi in the post test hooks, simply because that was the quick and dirty way to get those libs in the environment. What's the clean and proper way? gabbi-tempest itself needs to be in the tempest virtualenv. * The post.yaml playbook which gathers up logs seems like a common thing, so I would hope it could be DRYed up a bit. What's the best way to do that? Thanks very much for any input.
[1] perf logging of a loaded placement: https://review.openstack.org/#/c/602484/ -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From liliueecg at gmail.com Wed Sep 19 14:51:37 2018 From: liliueecg at gmail.com (Li Liu) Date: Wed, 19 Sep 2018 10:51:37 -0400 Subject: [openstack-dev] [cyborg] Weekly meeting canceled for this week Message-ID: Hi team, The Cyborg weekly meeting today is canceled as most of the folks in China are still fighting with jet lag from Denver. -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Sep 19 14:02:10 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 19 Sep 2018 16:02:10 +0200 Subject: [openstack-dev] [Neutron] Removing external_bridge_name config option Message-ID: <30362473-35B0-499F-BEED-E219BC4FFA07@redhat.com> Hi, Some time ago I proposed patch [1] to remove the config option „external_network_bridge”. This option was deprecated for removal in Ocata, so I think it’s time to finally get rid of it. There are quite a few projects which still use this option [2]. I will try to propose patches to remove it from those projects as well, but if You are a maintainer of such a project, it would be great if You could remove it. If You do, please use the same topic as in [1] - it will make it easier for me to track which projects have already removed it. Thx a lot in advance for any help :) [1] https://review.openstack.org/#/c/567369 [2] http://codesearch.openstack.org/?q=external_network_bridge&i=nope&files=&repos= — Slawek Kaplonski Senior software engineer Red Hat From samuel at cassi.ba Wed Sep 19 14:06:04 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Wed, 19 Sep 2018 07:06:04 -0700 Subject: [openstack-dev] [chef] fog-openstack 0.3 Message-ID: Ohai! fog-openstack 0.3 has been released upstream, but it also seems to be a breaking release by way of naming convention.
At this time, it is advised to pin your client cookbook at '<0.3.0'. Changes to compensate for the breakage are being delivered to git and Supermarket, but the most immediate workaround is to pin. Once things are working with fog-openstack 0.3, ChefDK will pick the new version up in a later release. Thank you for your attention. -scas From mordred at inaugust.com Wed Sep 19 14:23:53 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 19 Sep 2018 09:23:53 -0500 Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper" In-Reply-To: References: Message-ID: On 09/19/2018 08:25 AM, Chris Dent wrote: > > I have a patch in progress to add some simple integration tests to > placement: > >     https://review.openstack.org/#/c/601614/ > > They use https://github.com/cdent/gabbi-tempest . The idea is that > the method for adding more tests is to simply add more yaml in > gate/gabbits, without needing to worry about adding to or think > about tempest. > > What I have at that patch works; there are two yaml files, one of > which goes through the process of confirming the existence of a > resource provider and inventory, booting a server, seeing a change > in allocations, resizing the server, seeing a change in allocations. > > But this is kludgy in a variety of ways and I'm hoping to get some > help or pointers to the right way. I'm posting here instead of > asking in IRC as I assume other people confront these same > confusions. The issues: > > * The associated playbooks are cargo-culted from stuff labelled >   "legacy" that I was able to find in nova's jobs. I get the >   impression that these are more verbose and duplicative than they >   need to be and are not aligned with modern zuul v3 coolness. Yes. Your life will be much better if you do not make more legacy jobs. They are brittle and hard to work with.
New jobs should either use the devstack base job, the devstack-tempest base job or the devstack-tox-functional base job - depending on what things are intended. You might want to check out: https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html also, cmurphy has been working on updating some of keystone's legacy jobs recently: https://review.openstack.org/602452 which might also be a source for copying from. > * It takes an age for the underlying devstack to build, I can >   presumably save some time by installing fewer services, and making >   it obvious how to add more when more are required. What's the >   canonical way to do this? Mess with {enable,disable}_service, cook >   the ENABLED_SERVICES var, do something with required_projects? http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190 Has an example of disabling services, of adding a devstack plugin, and of adding some lines to localrc. http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117 Has some more complex config bits in it. In your case, I believe you want to have parent: devstack-tempest instead of parent: devstack-tox-functional > * This patch, and the one that follows it [1] dynamically install >   stuff from pypi in the post test hooks, simply because that was >   the quick and dirty way to get those libs in the environment. >   What's the clean and proper way? gabbi-tempest itself needs to be >   in the tempest virtualenv. This I don't have an answer for. I'm guessing this is something one could do with a tempest plugin? > * The post.yaml playbook which gathers up logs seems like a common >   thing, so I would hope could be DRYed up a bit. What's the best >   way to that? Yup. Legacy devstack-gate based jobs are pretty terrible. You can delete the entire post.yaml if you move to the new devstack base job. The base devstack job has a much better mechanism for gathering logs. > Thanks very much for any input. 
> > [1] perf logging of a loaded placement: > https://review.openstack.org/#/c/602484/ > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mordred at inaugust.com Wed Sep 19 14:29:46 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 19 Sep 2018 09:29:46 -0500 Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper" In-Reply-To: References: Message-ID: <79ef4d5a-9816-bacb-2958-60899c021039@inaugust.com> On 09/19/2018 09:23 AM, Monty Taylor wrote: > On 09/19/2018 08:25 AM, Chris Dent wrote: >> >> I have a patch in progress to add some simple integration tests to >> placement: >> >>      https://review.openstack.org/#/c/601614/ >> >> They use https://github.com/cdent/gabbi-tempest . The idea is that >> the method for adding more tests is to simply add more yaml in >> gate/gabbits, without needing to worry about adding to or think >> about tempest. >> >> What I have at that patch works; there are two yaml files, one of >> which goes through the process of confirming the existence of a >> resource provider and inventory, booting a server, seeing a change >> in allocations, resizing the server, seeing a change in allocations. >> >> But this is kludgy in a variety of ways and I'm hoping to get some >> help or pointers to the right way. I'm posting here instead of >> asking in IRC as I assume other people confront these same >> confusions. The issues: >> >> * The associated playbooks are cargo-culted from stuff labelled >>    "legacy" that I was able to find in nova's jobs. I get the >>    impression that these are more verbose and duplicative than they >>    need to be and are not aligned with modern zuul v3 coolness. > > Yes. 
Your life will be much better if you do not make more legacy jobs. > They are brittle and hard to work with. > > New jobs should either use the devstack base job, the devstack-tempest > base job or the devstack-tox-functional base job - depending on what > things are intended. > > You might want to check out: > > https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html > > also, cmurphy has been working on updating some of keystone's legacy > jobs recently: > > https://review.openstack.org/602452 > > which might also be a source for copying from. > >> * It takes an age for the underlying devstack to build, I can >>    presumably save some time by installing fewer services, and making >>    it obvious how to add more when more are required. What's the >>    canonical way to do this? Mess with {enable,disable}_service, cook >>    the ENABLED_SERVICES var, do something with required_projects? > > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190 > > Has an example of disabling services, of adding a devstack plugin, and > of adding some lines to localrc. > > > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117 > > Has some more complex config bits in it. > > In your case, I believe you want to have parent: devstack-tempest > instead of parent: devstack-tox-functional > > >> * This patch, and the one that follows it [1] dynamically install >>    stuff from pypi in the post test hooks, simply because that was >>    the quick and dirty way to get those libs in the environment. >>    What's the clean and proper way? gabbi-tempest itself needs to be >>    in the tempest virtualenv. > > This I don't have an answer for. I'm guessing this is something one > could do with a tempest plugin? K. This: http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/.zuul.yaml#n184 Has an example of a job using a tempest plugin. 
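Pulling those pointers together, a condensed sketch of what such a job definition might look like — the job name, service toggles, and plugin path below are illustrative assumptions, not a tested configuration:

```yaml
# Hypothetical .zuul.yaml sketch for a gabbi-tempest based job.
- job:
    name: placement-gabbi-tempest
    parent: devstack-tempest
    description: Run gabbi-based placement tests under tempest.
    vars:
      devstack_localrc:
        # devstack installs plugins listed here into the tempest venv,
        # which is one route to getting gabbi-tempest where it needs to be;
        # the checkout path is a placeholder
        TEMPEST_PLUGINS: /opt/stack/gabbi-tempest
      devstack_services:
        # trim services the tests don't need, to speed up the build
        horizon: false
        swift: false
      tempest_test_regex: gabbi
```

How a plugin that lives outside gerrit gets onto the node in the first place is exactly the open install question above, so treat the path as a placeholder rather than a recommendation.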
>> * The post.yaml playbook which gathers up logs seems like a common >>    thing, so I would hope could be DRYed up a bit. What's the best >>    way to that? > > Yup. Legacy devstack-gate based jobs are pretty terrible. > > You can delete the entire post.yaml if you move to the new devstack base > job. > > The base devstack job has a much better mechanism for gathering logs. > >> Thanks very much for any input. >> >> [1] perf logging of a loaded placement: >> https://review.openstack.org/#/c/602484/ >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From colleen at gazlene.net Wed Sep 19 14:37:28 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 19 Sep 2018 16:37:28 +0200 Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper" In-Reply-To: References: Message-ID: <1537367848.3306917.1513544104.45E40257@webmail.messagingengine.com> On Wed, Sep 19, 2018, at 4:23 PM, Monty Taylor wrote: > On 09/19/2018 08:25 AM, Chris Dent wrote: > > > also, cmurphy has been working on updating some of keystone's legacy > jobs recently: > > https://review.openstack.org/602452 > > which might also be a source for copying from. 
> Disclaimer before anyone blindly copies: https://bit.ly/2vq26SR From mriedemos at gmail.com Wed Sep 19 14:41:45 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 19 Sep 2018 09:41:45 -0500 Subject: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)? In-Reply-To: <46cee3db-eecb-97f6-a793-c33d57a71ad2@gmail.com> References: <70abbabe-2480-4c25-0665-a14b2eb5f3ab@gmail.com> <75ef2549-dfba-3267-5e76-0c59c64cd4ac@gmail.com> <165ea8d9f10.add97103175992.5456929857422374986@ghanshyammann.com> <46cee3db-eecb-97f6-a793-c33d57a71ad2@gmail.com> Message-ID: <940422db-3093-4144-d33e-7954e366fb64@gmail.com> On 9/18/2018 12:26 PM, Matt Riedemann wrote: > On 9/17/2018 9:41 PM, Ghanshyam Mann wrote: >>   ---- On Tue, 18 Sep 2018 09:33:30 +0900 Alex Xu >> wrote ---- >>   > That only means after 599276 we only have servers API and >> os-instance-action API stopped accepting the undefined query parameter. >>   > What I'm thinking about is checking all the APIs, add >> json-query-param checking with additionalProperties=True if the API >> don't have yet. And using another microversion set >> additionalProperties to False, then the whole Nova API become consistent. >> >> I too vote for doing it for all other API together. Restricting the >> unknown query or request param are very useful for API consistency. >> Item#1 in this etherpadhttps://etherpad.openstack.org/p/nova-api-cleanup >> >> If you would like, i can propose a quick spec for that and positive >> response to do all together then we skip to do that in 599276 >> otherwise do it for GET servers in 599276. >> >> -gmann > > I don't care too much about changing all of the other > additionalProperties=False in a single microversion given we're already > kind of inconsistent with this in a few APIs. Consistency is ideal, but > I thought we'd be lumping in other cleanups from the etherpad into the > same microversion/spec which will likely slow it down during spec > review. 
> For example, I'd really like to get rid of the weird server
> response field prefixes like "OS-EXT-SRV-ATTR:". Would we put those into
> the same mass cleanup microversion / spec or split them into individual
> microversions? I'd prefer not to see an explosion of microversions for
> cleaning up oddities in the API, but I could see how doing them all in a
> single microversion could be complicated.

Just an update on https://review.openstack.org/#/c/599276/ - the change is approved. We left additionalProperties=True in the GET /servers(/detail) APIs for consistency with 2.5 and 2.26, and for expediency in just getting the otherwise pretty simple change approved. -- Thanks, Matt From jim at jimrollenhagen.com Wed Sep 19 14:49:55 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 19 Sep 2018 08:49:55 -0600 Subject: [openstack-dev] [ironic][edge] Notes from the PTG Message-ID: I wrote up some notes from my perspective at the PTG for some internal teams and figured I may as well share them here. They're primarily from the ironic and edge WG rooms. Fairly raw, very long, but hopefully useful to someone. Enjoy.

Tuesday: edge

Edge WG (IMHO) has historically just talked about use cases, hand-waved a bit, and jumped to requiring an autonomous control plane per edge site - thus spending all of their time talking about how they will make glance and keystone sync data between control planes. penick described roughly what we do with keystone/athenz and how that can be used in a federated keystone deployment to provide autonomy for any control plane, but also a single view via a global keystone. penick and I both kept pushing for people to define a real architecture, and we ended up with 10-15 people huddled around an easel for most of the afternoon. Of note:

- Windriver (and others?) refuse to budge on the many control plane thing
  - This means that they will need some orchestration tooling up top in the main DC / client machines to even come close to reasonably managing all of these sites
  - They will probably need some syncing tooling
    - glance->glance isn't a thing, no matter how many people say it is.
    - Glance PTL recommends syncing metadata outside of glance process, and a global(ly distributed?) glance backend.
- We also defined the single pane of glass architecture that Oath plans to deploy
  - Okay with losing connectivity from central control plane to single edge site
  - Each edge site is a cell
  - Each far edge site is just compute nodes
  - Still may want to consider image distribution to edge sites so we don't have to go back to main DC?
  - Keystone can be distributed the same as first architecture
  - Nova folks may start investigating putting API hosts at the cell level to get the best of both worlds - if there's a network partition, can still talk to cell API to manage things
  - Need to think about removing the need for rabbitmq between edge and far edge
    - Kafka was suggested in the edge room for oslo.messaging in general
    - Etcd watchers may be another option for an o.msg driver
    - Other options are more invasive into nova - involve changing how nova-compute talks to conductor (etcd, etc) or even putting REST APIs in nova-compute (and nova-conductor?)
- Neutron is going to work on an OVS "superagent" - superagent does the RPC handling, talks some other way to child agents. Intended to scale to thousands of children. Primary use case is smart nics but seems like a win for the edge case as well.

penick took an action item to draw up the architecture diagrams in a digestible format.

Wednesday: ironic things

Started with a retrospective. See https://etherpad.openstack.org/p/ironic-stein-ptg-retrospective for the notes - there weren't many surprising things here.
We did discuss trying to target some quick wins for the beginning of the cycle, so that we didn't have all of our features trying to land at the end. Using wsgi with the ironic-api was mentioned as a potential regression, but we agreed it's a config/documentation issue. I took an action to make a task to document this better. Next we quickly reviewed our vision doc, and people didn't have much to say about it.

Metalsmith: it's a thing, it's being included into the ironic project. Dmitry is open to optionally supporting placement. Multiple instances will be a feature in the future. Otherwise mostly feature complete; the goal is to keep it simple.

Networking-ansible: redhat is building tooling that integrates with upstream ansible modules for networking gear. Kind of an alternative to n-g-s. Not really much on plans here, RH just wanted to introduce it to the community. Some discussion about it possibly replacing n-g-s later, but no hard plans.

Deploy steps/templates: we talked about what the next steps are, and what an MVP looks like. Deploy templates are triggered by the traits that nodes are scheduled against, and can add steps before or after (or in between?) the default deploy steps. We agreed that we should add a RAID deploy step, with standing questions for how arguments are passed to that deploy step, and what the defaults look like. mgoddard and I took an action item to open an RFE for this. We also agreed that we should start thinking about how the current (only) deploy step should be split into multiple steps.

Graphical console: we discussed what the next steps are for this work. We agreed that we should document the interface and what is returned (a URL), and also start working on a redfish driver for graphical consoles. We also noted that we can test in the gate with qemu, but we only need to test that a correct URL is returned, not that the console actually works (because we don't really care that qemu's console works).
Python 3: we talked about the changes to our jobs that are needed. We agreed to use the base name of the jobs for Python 3 (as those will be used for a long time), and add a "python2" prefix for the Python 2 jobs. We also discussed dropping certain coverage for Python 2, as our CI jobs tend to mostly test the same codepaths with some config differences. Last, we talked about mixed environment Python 2 and 3 testing, as this will be a thing people doing rolling upgrades of Python versions will hit. I sent an email to the ML asking if others had done or thought about this, and it sounds like we can limit that testing to oslo.messaging, and a task was reported there.

Pre-upgrade checks: Not much was discussed here; TheJulia is going to look into it. One item of note is that there is an oslo project being proposed that can carry some of the common code for this.

Performance improvements: We first discussed our virt driver's performance. It was found that Nova's power sync loop makes a call to Ironic for each instance that the compute service is managing. We do some node caching in our driver that would be useful for this. I took an action item to look into it, and have a WIP patch: https://review.openstack.org/#/c/602127/ . That patch just needs a bug filed and unit tests written. On Thursday, we talked with Nova about other performance things, and agreed we should implement a hook in Nova that Ironic can call to say "power changed" and "deploy done" and other things like this. This will help reduce or eliminate polling from our virt driver to Ironic, and also allow Nova to notice these changes faster. More on that later?

Splitting the conductor: we discussed the many tasks the conductor is responsible for, and pondered if we could or should split things up. This has implications (good and bad) for operability, scalability, and security. Splitting the conductor to multiple workers would allow operators to use different security models for different tasks (e.g. only allowing an "OOB worker" access to the OOB network). It would also allow folks to scale out workers that do lots of work (like the power status loop) separately from those that do minimal work (writing PXE configs). I intend to investigate this more during this cycle and lay out a plan for doing the work. This also may require better distributed locking, which TheJulia has started investigating.

Changing boot mode defaults: Apparently Intel is going to stop shipping hardware that is capable of legacy BIOS booting in 2020. We agreed that we should work toward changing the default boot mode to UEFI to better prepare our users, but we can't drop legacy BIOS mode until all of the old hardware in the world is gone. TheJulia is going to dig through the code and make a task list.

UEFI HTTPClient booting: This is a DHCP class that allows the DHCP server to return a URL instead of a "next-server" (TFTP location) response. This is a clear value add, and TheJulia is going to work on it as she is already neck deep in that area of code. We also need to ensure that Neutron supports this. It should, as it's just more DHCP options, but we need to verify.

SecureBoot: I presented Oath's secureboot model, which doesn't depend on a centralized attestation server. It made sense to people, and we discussed putting the driver in tree. The process does rely on some enhancements to iPXE, so Oath is going to investigate upstreaming those changes and publishing more documentation, and then an in-tree driver should be no problem. We also discussed Ironic's current SecureBoot (TrustedBoot?) implementations. Currently it only works with PXE, not iPXE or Grub2. TheJulia is going to look into adding this support. We should be able to do CI jobs for it, as TPM 1.2 and 2.0 emulation both seem to be supported in QEMU as of 2.11.

NIC PXE configuration as a clean step: the DRAC driver team has a desire to configure NICs for PXE or not, and sync with the ironic database's pxe_enabled field.
This has gone back and forth in IRC. We were able to resolve some of the issues with it, and rpioso is going to write a small spec to make sure we get the details right.

Thursday: more ironic things

Neutron cross-project discussion: we discussed SmartNICs, which the Neutron team had also discussed the previous day. In short, SmartNICs are NICs that run OVS. The Neutron team discussed the scalability of their OVS agent running across thousands of machines, and are planning to make some sort of "superagent". This superagent essentially owns a group of OVS agents. It will talk to Neutron over rabbit as usual, but then use some other protocol to talk to the OVS agents it is managing. This should help with rabbit load even in "standard" OpenStack environments, and is especially useful (to me) for minimizing rabbitmq connections from far edge sites. The catch with SmartNICs and Ironic is that the NICs must have power to be configured (and thus the machine must be on). This breaks our general model of "only configure networking with the machine off, to make sure we don't cross streams between tenants and control plane". We came to a decent compromise (I think), and agreed to continue in the ironic spec, and revisit the topic in Berlin.

Federation: we discussed federation and people seemed interested; however, I don't believe we made any real progress toward getting it done. There's still a debate about whether this should be something in Ironic itself, or if there should just be some sort of proxy layer in front of multiple Ironic environments. To be continued in the spec.

Agent polling: we discussed the spec to drop communication from IPA to the conductor. It seems like nobody has major issues with it, and the spec just needs some polishing before landing.

L3 deployments: We brought this up, and again there seems to be little contention. I ended up approving the spec shortly after.

Neutron event processing: This work has been hanging for years and not getting done.
Some folks wondered if we should just poll Neutron, if that gets the work done more quickly. Others wondered if we should even care about it at all (we should). TheJulia is going to follow up with dtantsur and vdrok to see if we can get someone to mainline some caffeine and just get it done.

CMDB: Oath and CERN presented their work toward speccing out a CMDB application that can integrate with Ironic. We discussed the problems that they are trying to solve and agreed they need solving. We also agreed that strict schema is better than blobjects (© jaypipes). We agreed it probably doesn't need to be in Ironic governance, but could be one day. The next steps are to start hacking in a new repo in the OpenStack infrastructure, and propose specs for any Ironic integration that is needed. Red Hat and Dell contributors also showed interest in the project and volunteered to help. Some folks are going to try and talk to the wider OpenStack community to find out if there's interest or needs from projects like Nova/Neutron/Cinder, etc.

Stein goals: We put together a list of goals and voted on them. Julia has since proposed the patch to document them: https://review.openstack.org/#/c/603161/

Last thing Thursday: Cross-project discussions with Nova. Summarized here, but lots of detail in the etherpad under the Ironic section: https://etherpad.openstack.org/p/nova-ptg-stein

Power sync: We discussed some problems CERN has with the instance power sync (Rackspace also saw these problems). In short, nova asserts power state if the instance "should" be off but the power is turned on out-of-band. Operators definitely need to be aware of this when doing maintenance on active machines, but we also discussed Ironic calling back to Nova when Ironic knows that the power state has been updated (via Ironic API, etc). I volunteered to look at this, and dansmith volunteered to help out.

API heaviness: We discussed how many API calls our virt driver does.
As mentioned earlier, I proposed a patch to make the power sync loop more lightweight. There's also lots of polling for tasks like deploy and rescue, which we can dramatically reduce with a callback from Ironic to Nova. I also volunteered to investigate this, and dansmith again agreed to help.

Compute host grouping: Ironic now has a mechanism for grouping conductors to nodes, and we want to mirror that in Nova. We discussed how to take the group as a config option and be able to find the other compute services managing that group, so we can build the hash ring correctly. We concluded that it's a really hard problem (TM), and agreed to also add a config option like "peer_list" that can be used to list other compute services in the same group. This can be read dynamically each time we build the hash ring, or can be a mutable config with updates triggered by a SIGHUP. We'll hash out the details in a blueprint or spec. Again, I agreed to begin the work, and dansmith agreed to help.

Capabilities filter: This was the last topic. It's been on the chopping block for ages, but we are just now reaching the point where it can be properly deprecated. We discussed the plan, and mostly agreed it was good enough. johnthetubaguy is going to send the plan wider and make sure it will work for folks. We also discussed modeling countable resources on Ironic resource providers, which will work as long as there is still some resource class with an inventory of one, like we have today. Some folks may investigate doing this, but it's fuzzy how much people care or if we really need/want to do it.

Friday: kind of bummed around the Ironic and TC rooms. Lots of interesting discussions, but nothing I feel like writing about here (as Ironic conversations were things like code deep-dives not worth communicating widely, and the TC topics have been written about to death).

// jim

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From mordred at inaugust.com Wed Sep 19 14:59:27 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 19 Sep 2018 09:59:27 -0500 Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper" In-Reply-To: <1537367848.3306917.1513544104.45E40257@webmail.messagingengine.com> References: <1537367848.3306917.1513544104.45E40257@webmail.messagingengine.com> Message-ID: <66b77394-b780-c3a4-3088-1c060b7c9d44@inaugust.com> On 09/19/2018 09:37 AM, Colleen Murphy wrote: > On Wed, Sep 19, 2018, at 4:23 PM, Monty Taylor wrote: >> On 09/19/2018 08:25 AM, Chris Dent wrote: >>> > >> also, cmurphy has been working on updating some of keystone's legacy >> jobs recently: >> >> https://review.openstack.org/602452 >> >> which might also be a source for copying from. >> > > Disclaimer before anyone blindly copies: https://bit.ly/2vq26SR Bah. Blindly copy all the things!!! From jim at jimrollenhagen.com Wed Sep 19 15:03:40 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 19 Sep 2018 09:03:40 -0600 Subject: [openstack-dev] [ironic][edge] Notes from the PTG In-Reply-To: References: Message-ID: On Wed, Sep 19, 2018 at 8:49 AM, Jim Rollenhagen wrote: > > Tuesday: edge > Since cdent asked in IRC, when we talk about edge and far edge, we defined these roughly like this: https://usercontent.irccloud-cdn.com/file/NunkkS2y/edge_architecture1.JPG // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Wed Sep 19 15:25:46 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 19 Sep 2018 16:25:46 +0100 (BST) Subject: [openstack-dev] Nominating Tetsuro Nakamura for placement-core Message-ID: I'd like to nominate Tetsuro Nakamura for membership in the placement-core team. 
Throughout placement's development Tetsuro has provided quality reviews; done the hard work of creating rigorous functional tests, making them fail, and fixing them; and implemented some of the complex functionality required at the persistence layer. He's aware of and respects the overarching goals of placement and has demonstrated pragmatism when balancing those goals against the requirements of nova, blazar and other projects. Please follow up with a +1/-1 to express your preference. No need to be an existing placement core, everyone with an interest is welcome. Thanks. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jungleboyj at gmail.com Wed Sep 19 15:32:52 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 19 Sep 2018 10:32:52 -0500 Subject: [openstack-dev] [cinder] Berlin Forum Proposals In-Reply-To: <20180919094226.ksvxaucta5bifwpx@localhost> References: <271402d3-d722-d1c9-22cc-39f809428f05@gmail.com> <20180919094226.ksvxaucta5bifwpx@localhost> Message-ID: <9e12ef6a-15eb-02a0-8b0b-813b397f1ce4@gmail.com> Gorka, Oh man!  Sorry for the duplication.  I will update the link on the Forum page if you are able to move your content over.  Think it will confuse people less if we use the page I most recently sent out.  Does that make sense? Thanks for catching this mistake! Jay On 9/19/2018 4:42 AM, Gorka Eguileor wrote: > On 18/09, Jay S Bryant wrote: >> Team, >> >> I have created an etherpad for our Forum Topic Planning: >> https://etherpad.openstack.org/p/cinder-berlin-forum-proposals >> >> Please add your ideas to the etherpad.  Thank you! >> >> Jay >> > Hi Jay, > > After our last IRC meeting, a couple of weeks ago, I created an etherpad > [1] and added it to the Forum wiki [2] (though I failed to mention it). > > I had added a possible topic to this etherpad [1], but I can move it to > yours and update the wiki if you like. > > Cheers, > Gorka.
> > > [1]: https://etherpad.openstack.org/p/cinder-forum-stein > [2]: https://wiki.openstack.org/wiki/Forum/Berlin2018 From ed at leafe.com Wed Sep 19 15:36:46 2018 From: ed at leafe.com (Ed Leafe) Date: Wed, 19 Sep 2018 10:36:46 -0500 Subject: [openstack-dev] Nominating Tetsuro Nakamura for placement-core In-Reply-To: References: Message-ID: <26DD9BCA-7A2E-44D1-BB30-F0D9E6940FF1@leafe.com> On Sep 19, 2018, at 10:25 AM, Chris Dent wrote: > > I'd like to nominate Tetsuro Nakamura for membership in the > placement-core team. I’m not a core, but if I were, I’d +1 that. -- Ed Leafe From openstack at fried.cc Wed Sep 19 15:37:40 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 19 Sep 2018 10:37:40 -0500 Subject: [openstack-dev] Nominating Tetsuro Nakamura for placement-core In-Reply-To: References: Message-ID: <6faaad78-5a2f-5dd6-2496-fc66ec885d9a@fried.cc> +1 On 09/19/2018 10:25 AM, Chris Dent wrote: > > > I'd like to nominate Tetsuro Nakamura for membership in the > placement-core team. Throughout placement's development Tetsuro has > provided quality reviews; done the hard work of creating rigorous > functional tests, making them fail, and fixing them; and implemented > some of the complex functionality required at the persistence layer. > He's aware of and respects the overarching goals of placement and has > demonstrated pragmatism when balancing those goals against the > requirements of nova, blazar and other projects. > > Please follow up with a +1/-1 to express your preference. No need to > be an existing placement core, everyone with an interest is welcome. > > Thanks. 
> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From geguileo at redhat.com Wed Sep 19 15:43:40 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 19 Sep 2018 17:43:40 +0200 Subject: [openstack-dev] [cinder] Berlin Forum Proposals In-Reply-To: <9e12ef6a-15eb-02a0-8b0b-813b397f1ce4@gmail.com> References: <271402d3-d722-d1c9-22cc-39f809428f05@gmail.com> <20180919094226.ksvxaucta5bifwpx@localhost> <9e12ef6a-15eb-02a0-8b0b-813b397f1ce4@gmail.com> Message-ID: <20180919154340.jbs7u35o2abjarvw@localhost> On 19/09, Jay S Bryant wrote: > Gorka, > > Oh man!  Sorry for the duplication.  I will update the link on the Forum > page if you are able to move your content over.  Think it will confused > people less if we use the page I most recently sent out.  Does that make > sense? > Hi Jay, Yup, it makes sense. I moved the contents and updated the wiki to point to your etherpad. > Thanks for catching this mistake! > It was my mistake for not mentioning the existing etherpad during the PTG... XD Cheers, Gorka. > Jay > > > On 9/19/2018 4:42 AM, Gorka Eguileor wrote: > > On 18/09, Jay S Bryant wrote: > > > Team, > > > > > > I have created an etherpad for our Forum Topic Planning: > > > https://etherpad.openstack.org/p/cinder-berlin-forum-proposals > > > > > > Please add your ideas to the etherpad.  Thank you! > > > > > > Jay > > > > > Hi Jay, > > > > After our last IRC meeting, a couple of weeks ago, I created an etherpad > > [1] and added it to the Forum wiki [2] (though I failed to mention it). > > > > I had added a possible topic to this etherpad [1], but I can move it to > > yours and update the wiki if you like. > > > > Cheers, > > Gorka. 
> > > > > > [1]: https://etherpad.openstack.org/p/cinder-forum-stein > > [2]: https://wiki.openstack.org/wiki/Forum/Berlin2018 > From jaypipes at gmail.com Wed Sep 19 15:44:16 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 19 Sep 2018 11:44:16 -0400 Subject: [openstack-dev] Nominating Tetsuro Nakamura for placement-core In-Reply-To: References: Message-ID: On 09/19/2018 11:25 AM, Chris Dent wrote: > I'd like to nominate Tetsuro Nakamura for membership in the > placement-core team. Throughout placement's development Tetsuro has > provided quality reviews; done the hard work of creating rigorous > functional tests, making them fail, and fixing them; and implemented > some of the complex functionality required at the persistence layer. > He's aware of and respects the overarching goals of placement and has > demonstrated pragmatism when balancing those goals against the > requirements of nova, blazar and other projects. > > Please follow up with a +1/-1 to express your preference. No need to > be an existing placement core, everyone with an interest is welcome. +1 From jaypipes at gmail.com Wed Sep 19 15:45:04 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 19 Sep 2018 11:45:04 -0400 Subject: [openstack-dev] [ironic][edge] Notes from the PTG In-Reply-To: References: Message-ID: On 09/19/2018 11:03 AM, Jim Rollenhagen wrote: > On Wed, Sep 19, 2018 at 8:49 AM, Jim Rollenhagen > wrote: > > Tuesday: edge > > > Since cdent asked in IRC, when we talk about edge and far edge, we > defined these roughly like this: > https://usercontent.irccloud-cdn.com/file/NunkkS2y/edge_architecture1.JPG Far out, man. 
-jay From mriedemos at gmail.com Wed Sep 19 16:57:30 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 19 Sep 2018 11:57:30 -0500 Subject: [openstack-dev] [nova] Super fun unshelve image_ref bugs In-Reply-To: References: <5ab19c15-6402-fc79-b0d2-a070e485a082@gmail.com> Message-ID: <9ea56245-52fe-1793-e063-3a49d152322c@gmail.com> On 12/1/2017 2:47 PM, Matt Riedemann wrote: > Andrew Laski also mentioned in IRC that we didn't replace the original > instance.image_ref with the shelved image id because the shelve > operation should be transparent to the end user, they have the same > image (not really), same volumes, same IPs, etc once they unshelve. And > he mentioned that if you rebuild, for example, you'd then rebuild to the > original image instead of the shelved snapshot image. > > I'm not sure how much I agree with that rebuild argument. I understand > it, but I'm not sure I agree with it. I think it's much easier to just > track things for what they are, which means saying if you create a guest > from a given image id, then track that in the instances table, don't lie > about it being something else. Dredging this back up since it will affect cross-cell resize which will rely on shelve/unshelve. I had a thought recently (and noted in https://bugs.launchpad.net/nova/+bug/1732428) that the RequestSpec points at the original image used to create the server, or last rebuild it (if the server was rebuilt with a new image). What if we used that during rebuilds rather than the instance.image_ref? Then unshelve could leave the instance.image_ref pointing at the shelve snapshot image (since that's what is actually backing the server at the time of unshelve and should fix the resize qcow2 bug linked above) but rebuild could still rebuild from the original (or last rebuild) image rather than the shelve snapshot image? 
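A rough sketch of the bookkeeping that proposal implies (hypothetical helper and object names, not actual nova code -- instance.image_ref would track whatever really backs the guest, while the RequestSpec would keep the original/last-rebuilt image for rebuild to use):

```python
# Hypothetical sketch of the proposed bookkeeping, not actual nova code.
# After unshelve, instance.image_ref points at the shelve snapshot that
# really backs the guest, while the RequestSpec keeps the original (or
# last-rebuilt) image so rebuild can still find it.

class FakeInstance:
    def __init__(self, image_ref):
        self.image_ref = image_ref

class FakeRequestSpec:
    def __init__(self, image_id):
        self.image_id = image_id

def unshelve(instance, snapshot_image_id):
    # Track what actually backs the server now: the shelve snapshot.
    instance.image_ref = snapshot_image_id

def image_for_rebuild(request_spec, new_image_id=None):
    # Rebuild targets the explicitly requested image if given, otherwise
    # the original/last-rebuilt image from the RequestSpec -- never the
    # shelve snapshot sitting in instance.image_ref.
    return new_image_id or request_spec.image_id

inst = FakeInstance(image_ref='original-image')
spec = FakeRequestSpec(image_id='original-image')
unshelve(inst, snapshot_image_id='shelve-snapshot')
print(inst.image_ref)            # shelve-snapshot
print(image_for_rebuild(spec))   # original-image
```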
The only hiccup I'm aware of is we then still need to *not* delete the snapshot image on unshelve that the instance is pointing at, which means shelve snapshot images could pile up over time, especially with cross-cell resize. Is that a problem? If so, could we have a periodic task that cleans up the old snapshot images based on some configured value? -- Thanks, Matt From mriedemos at gmail.com Wed Sep 19 17:00:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 19 Sep 2018 12:00:53 -0500 Subject: [openstack-dev] Nominating Tetsuro Nakamura for placement-core In-Reply-To: References: Message-ID: On 9/19/2018 10:25 AM, Chris Dent wrote: > > > I'd like to nominate Tetsuro Nakamura for membership in the > placement-core team. Throughout placement's development Tetsuro has > provided quality reviews; done the hard work of creating rigorous > functional tests, making them fail, and fixing them; and implemented > some of the complex functionality required at the persistence layer. > He's aware of and respects the overarching goals of placement and has > demonstrated pragmatism when balancing those goals against the > requirements of nova, blazar and other projects. > > Please follow up with a +1/-1 to express your preference. No need to > be an existing placement core, everyone with an interest is welcome. Soft +1 from me given I mostly have to defer to those that work more closely with Tetsuro. I agree he's a solid contributor, works hard, finds issues, fixes them before being asked, etc. That's awesome. Reminds me a lot of gibi when we nominated him. 
-- Thanks, Matt From zbitter at redhat.com Wed Sep 19 17:17:59 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 19 Sep 2018 13:17:59 -0400 Subject: [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal In-Reply-To: References: <55c81a61-5670-c1c3-20f1-aa4d8153c8a2@redhat.com> Message-ID: <08783f7e-8201-e6ad-993b-dbb0193de536@redhat.com> On 18/09/18 9:10 PM, Jaesuk Ahn wrote: > On Wed, Sep 19, 2018 at 5:30 AM Zane Bitter > wrote: Restoring the whole quote here because I accidentally sent the original to the -sigs list only and not the -dev list. >> As others have mentioned, I think this is diving into solutions when we haven't defined the problems. I know you mentioned it briefly in the PTG session, but that context never made it to the review or the mailing list. >> >> So AIUI the issue you're trying to solve here is that the TC members seem distant and inaccessible to Chinese contributors because we're not on the same social networks they are? >> >> Perhaps there are others too? >> >> Obvious questions to ask from there would be: >> >> - Whether this is the most important issue facing contributors from the APAC region >> >> - To what extent the proposed solution is expected to help > > > I do agree with Zane on the above point. For the record, I didn't express an opinion. I'm just pointing out what the questions are. > As one of OpenStack participants from Asia region, I will put my > personal opinion. > IRC and ML have been a unified and standard way of communication in > OpenStack Community, and that has been a good way to encourage "open > communication" on a unified method wherever you are from, or whatever > background you have. If the whole community starts recognizing some other > tools (say WeChat) as a recommended alternative communication method > because there are many people there, ironically, it might be a way to > break "diversity" and "openness" we want to embrace. 
> > Using whatever social media (or tools) in a specific region due to any > reason is not a problem. Anyone is free to use anything. Only thing we > need to make sure is, if you want to communicate officially with the > whole community, there is a very well defined and unified way to do it. > This is currently IRC and ML. Some Korean devs have difficulties using > IRC. However, there is not a perfect tool out there in this world, and > we accept all the reason why the community selected IRC as official tool > > But, that being said, there are some things I am facing with IRC from > here in Korea > > As a person from Asia, I do have some of pain points. Because of time > differences, often, I have to do archive searching since most of > conversations happened while I am sleeping. IRC is not a good tool to > perform "search backlog". Although there is message archive you can dig, > it is still hard. This is a problem. I do love to see any technical > solution for me to efficiently and easily go through irc backlog, like > most of modern chat tools. > > Secondly, IRC is not a popular one even in dev community here in Korea. > In addition, in order to properly use irc, you need to do extra work, > something like setting up bouncing server. I had to do google search to > figure out how to use it. I think part of the disconnect here is that people have different ideas about what IRC (and chat in general) is for. For me it's a way to conduct synchronous conversations. These tend to go badly on the mailing list (really long threads of 1 sentence per message) or on code review (have to keep refreshing), so it's good that we have another tool to do this. I answer a lot of user questions, clarify comments on patches, and obviously join team meetings in IRC. The key part is 'synchronous' though. If I'm not there, the conversation is not going to be synchronous. 
I don't run a bouncer, although I generally leave my computer running when I'm not working so you'll often (but not always) be able to ping me, and I'll usually look back to see if it was something important. Otherwise it's 50-50 whether I'll even bother to read scrollback, and certainly not for more than a couple of channels. Other people, however, have a completely different perspective: they want a place where they are guaranteed to be reachable at any time (even if they don't see it until later) and the entire record is always right there. I think Slack was built for those kinds of people. You would have to drag me kicking and screaming into Slack even if it weren't proprietary software. I don't know where WeChat falls on that spectrum. But maybe part of the issue is that we're creating too high an expectation of what it means to participate in the community (e.g. if you're not going to set up a bouncer and be reachable 24/7 then you might as well not get involved at all - this is 100% untrue). I've seen several assertions, including in the review, that any decisions must be documented on the mailing list or IRC, and I'm not sure I agree. IMHO, any decisions should be documented on the mailing list, period. I'd love to see more participation on the mailing list. Since it is asynchronous already it's somewhat friendlier to those in APAC time zones (although there are still issues, real or perceived, with decisions being reached before anyone on that side of the world has a chance to weigh in), and a lot easier than carrying on a conversation in real time for those who don't speak English natively. And while there can still be technical challenges with mailing lists, almost every company allows email through their corporate firewall. AIUI though, augmenting IRC was not the point of the proposal. Rather, I think it was for TC members to 'fly the flag' in WeChat to be more visible and available to the portion of the community that is there. 
> In that sense, It would be great to have > OpenStack community provided, simplified and well-written, written in > multiple language, IRC guide docs. Alternatively, if OpenStack community > can provide a good web-based irc client tool, that would be fantastic. I haven't tried it but: https://webchat.freenode.net/ > As I described the above, we can certainly have a healthy discussion on > what different and real problems we are facing from Asia. > However, I don't think this TC resolution is good way to do that. > > Cheers, > -- > > Jaesuk Ahn, Team Lead > Virtualization SW Lab, SW R&D Center > > SK Telecom > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > From mrhillsman at gmail.com Wed Sep 19 17:33:33 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 19 Sep 2018 12:33:33 -0500 Subject: [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal In-Reply-To: <08783f7e-8201-e6ad-993b-dbb0193de536@redhat.com> References: <55c81a61-5670-c1c3-20f1-aa4d8153c8a2@redhat.com> <08783f7e-8201-e6ad-993b-dbb0193de536@redhat.com> Message-ID: Regarding some web clients that are potentially useful https://webchat.freenode.net/ - Zane mentioned this already and I can say I tried/used it some time ago until I opted for CLI/alternatives https://riot.im (iOS and Android apps available along with online client) - i find it a bit sluggish at times, others have not, either way it is a decent alternative https://thelounge.chat/ - have not tried it yet but looks promising especially self-hosted option https://irccloud.com - what I currently use, I do believe it can be blocked, i am looking into riot and thelounge tbh On Wed, Sep 19, 2018 at 12:18 PM Zane Bitter wrote: > On 18/09/18 9:10 PM, Jaesuk Ahn wrote: > > On Wed, Sep 19, 2018 at 5:30 AM Zane Bitter > > wrote: > > Resotring the whole quote here because I accidentally sent 
the original > to the -sigs list only and not the -dev list. > > >> As others have mentioned, I think this is diving into solutions when we > haven't defined the problems. I know you mentioned it briefly in the PTG > session, but that context never made it to the review or the mailing list. > >> > >> So AIUI the issue you're trying to solve here is that the TC members > seem distant and inaccessible to Chinese contributors because we're not on > the same social networks they are? > >> > >> Perhaps there are others too? > >> > >> Obvious questions to ask from there would be: > >> > >> - Whether this is the most important issue facing contributors from the > APAC region > >> > >> - To what extent the proposed solution is expected to help > > > > > > I do agree with Zane on the above point. > > For the record, I didn't express an opinion. I'm just pointing out what > the questions are. > > > As one of OpenStack participants from Asia region, I will put my > > personal opinion. > > IRC and ML has been an unified and standard way of communication in > > OpenStack Community, and that has been a good way to encourage "open > > communication" on a unified method wherever you are from, or whatever > > background you have. If the whole community start recognize some other > > tools (say WeChat) as recommended alternative communication method > > because there are many people there, ironically, it might be a way to > > break "diversity" and "openness" we want to embrace. > > > > Using whatever social media (or tools) in a specific region due to any > > reason is not a problem. Anyone is free to use anything. Only thing we > > need to make sure is, if you want to communicate officially with the > > whole community, there is a very well defined and unified way to do it. > > This is currently IRC and ML. Some of Korean dev has difficulties to use > > IRC. 
However, there is not a perfect tool out there in this world, and > > we accept all the reason why the community selected IRC as official tool > > > > But, that being said, There are some things I am facing with IRC from > > here in Korea > > > > As a person from Asia, I do have some of pain points. Because of time > > differences, often, I have to do achieve searching since most of > > conversations happened while I am sleeping. IRC is not a good tool to > > perform "search backlog". Although there is message archive you can dig, > > it is still hard. This is a problem. I do love to see any technical > > solution for me to efficiently and easily go through irc backlog, like > > most of modern chat tools. > > > > Secondly, IRC is not a popular one even in dev community here in Korea. > > In addition, in order to properly use irc, you need to do extra work, > > something like setting up bouncing server. I had to do google search to > > figure out how to use it. > > I think part of the disconnect here is that people have different ideas > about what IRC (and chat in general) is for. > > For me it's a way to conduct synchronous conversations. These tend to go > badly on the mailing list (really long threads of 1 sentence per > message) or on code review (have to keep refreshing), so it's good that > we have another tool to do this. I answer a lot of user questions, > clarify comments on patches, and obviously join team meetings in IRC. > > The key part is 'synchronous' though. If I'm not there, the conversation > is not going to be synchronous. I don't run a bouncer, although I > generally leave my computer running when I'm not working so you'll often > (but not always) be able to ping me, and I'll usually look back to see > if it was something important. Otherwise it's 50-50 whether I'll even > bother to read scrollback, and certainly not for more than a couple of > channels. 
> > Other people, however, have a completely different perspective: they > want a place where they are guaranteed to be reachable at any time (even > if they don't see it until later) and the entire record is always right > there. I think Slack was built for those kinds of people. You would have > to drag me kicking and screaming into Slack even if it weren't > proprietary software. > > I don't know where WeChat falls on that spectrum. But maybe part of the > issue is that we're creating too high an expectation of what it means to > participate in the community (e.g. if you're not going to set up a > bouncer and be reachable 24/7 then you might as well not get involved at > all - this is 100% untrue). I've seen several assertions, including in > the review, that any decisions must be documented on the mailing list or > IRC, and I'm not sure I agree. IMHO, any decisions should be documented > on the mailing list, period. > > I'd love to see more participation on the mailing list. Since it is > asynchronous already it's somewhat friendlier to those in APAC time > zones (although there are still issues, real or perceived, with > decisions being reached before anyone on that side of the world has a > chance to weigh in), and a lot easier than carrying on a conversation in > real time for those who don't speak English natively. And while can > still be technical challenges with mailing lists, almost every company > allows email through their corporate firewall. > > AIUI though, augmenting IRC was not the point of the proposal. Rather, I > think it was for TC members to 'fly the flag' in WeChat to be more > visible and available to the portion of the community that is there. > > > In that sense, It would be great to have > > OpenStack community provided, simplified and well-written, written in > > multiple language, IRC guide docs. Alternatively, if OpenStack community > > can provide a good web-based irc client tool, that would be fantastic. 
> > I haven't tried it but: https://webchat.freenode.net/ > > > As I described the above, we can certainly have a healthy discussion on > > what different and real problems we are facing from Asia. > > However, I don't think this TC resolution is good way to do that. > > > > Cheers, > > -- > > > > Jaesuk Ahn, Team Lead > > Virtualization SW Lab, SW R&D Center > > > > SK Telecom > > > > > > _______________________________________________ > > openstack-sigs mailing list > > openstack-sigs at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Wed Sep 19 17:35:02 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 20 Sep 2018 02:35:02 +0900 Subject: [openstack-dev] [Neutron] Removing external_bridge_name config option In-Reply-To: <30362473-35B0-499F-BEED-E219BC4FFA07@redhat.com> References: <30362473-35B0-499F-BEED-E219BC4FFA07@redhat.com> Message-ID: Hi, I would like to share some information to help the migration from external_network_bridge. The background of the deprecation is described in https://bugs.launchpad.net/neutron/+bug/1491668 I also shared a slide to explain the detail. https://www.slideshare.net/ritchey98/neutron-brex-is-now-deprecated-what-is-modern-way Neutron: br-ex is now deprecated! what is modern way? I hope these help you to push away the usage of external_network_bridge. Thanks, Akihiro Motoki (IRC: amotoki) 2018年9月19日(水) 23:02 Slawomir Kaplonski : > > Hi, > > Some time ago I proposed patch [1] to remove config option „external_network_bridge”. 
> This option was deprecated to removal in Ocata so I think it’s time to get rid of it finally. > > There is quite many projects which still uses this option [2]. I will try to propose patches for those projects to remove it also from there but if You are maintainer of such project, it would be great if You can remove it. If You would do it, please use same topic as is in [1] - it will allow me easier track which projects already removed it. > Thx a lot in advance for any help :) > > [1] https://review.openstack.org/#/c/567369 > [2] http://codesearch.openstack.org/?q=external_network_bridge&i=nope&files=&repos= > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lbragstad at gmail.com Wed Sep 19 18:10:05 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 19 Sep 2018 13:10:05 -0500 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: Message-ID: johnsom (from octavia) had a good idea, which was to use the service types that are defined already [0]. I like this for three reasons, specifically. First, it's already a known convention for services that we can just reuse. Second, it includes a spacing convention (e.g. load-balancer vs load_balancer). Third, it's relatively short since it doesn't include "os" or "api". So long as there isn't any objection to that, we can start figuring out how we want to do the method and resource parts. I pulled some policies into a place where I could try and query them for specific patterns and existing usage [1]. 
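To make the shape of that convention concrete, here is a tiny hypothetical helper (not part of oslo.policy, and the method segment shown is only one possibility) that builds names as service-type:resource:method, with the service type taken from the service-types list:

```python
# Hypothetical sketch of the naming convention being discussed:
# <service-type>:<resource>:<method>, where the service type comes from
# the service-types authority (e.g. "load-balancer", "compute").
# This is not an oslo.policy API; the resulting string is simply what a
# project would use as the name of a policy rule.

def policy_name(service_type, resource, method):
    parts = (service_type, resource, method)
    for part in parts:
        # ":" is the separator, so the segments themselves must not use it.
        if not part or ':' in part:
            raise ValueError('each segment must be non-empty and ":"-free')
    return ':'.join(parts)

# Octavia's example from earlier in the thread, minus the "os_..._api" prefix:
print(policy_name('load-balancer', 'loadbalancer', 'post'))
# load-balancer:loadbalancer:post
```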
With the representation that I have (nova, neutron, glance, cinder, keystone, mistral, and octavia):

- *create* is favored over post (105 occurrences to 7)
- *list* is favored over get_all (74 occurrences to 28)
- *update* is favored over put/patch (91 occurrences to 10)

From this perspective, using the HTTP method might be slightly redundant for projects using the DocumentedRuleDefault object from oslo.policy since it contains the URL and method for invoking the policy. It also might differ depending on the service implementing the API (some might use put instead of patch to update a resource). Conversely, using the HTTP method in the policy name itself doesn't require use of DocumentedRuleDefault, although its usage is still recommended. Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention? [0] https://service-types.openstack.org/service-types.json [1] https://gist.github.com/lbragstad/5000b46f27342589701371c88262c35b#file-policy-names-yaml On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote: > If we consider dropping "os", should we entertain dropping "api", too? Do > we have a good reason to keep "api"? > > I wouldn't be opposed to simple service types (e.g "compute" or > "loadbalancer"). > > On Sat, Sep 15, 2018 at 9:01 AM Morgan Fainberg > wrote: > >> I am generally opposed to needlessly prefixing things with "os". >> >> I would advocate to drop it. >> >> >> On Fri, Sep 14, 2018, 20:17 Lance Bragstad wrote: >> >>> Ok - yeah, I'm not sure what the history behind that is either... >>> >>> I'm mainly curious if that's something we can/should keep or if we are >>> opposed to dropping 'os' and 'api' from the convention (e.g. >>> load-balancer:loadbalancer:post as opposed to >>> os_load-balancer_api:loadbalancer:post) and just sticking with the >>> service-type? 
>>> >>> On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson >>> wrote: >>> >>>> I don't know for sure, but I assume it is short for "OpenStack" and >>>> prefixing OpenStack policies vs. third party plugin policies for >>>> documentation purposes. >>>> >>>> I am guilty of borrowing this from existing code examples[0]. >>>> >>>> [0] >>>> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html >>>> >>>> Michael >>>> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad >>>> wrote: >>>> > >>>> > >>>> > >>>> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson >>>> wrote: >>>> >> >>>> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" >>>> >> which maps to the "os--api::" format. >>>> > >>>> > >>>> > Thanks for explaining the justification, Michael. >>>> > >>>> > I'm curious if anyone has context on the "os-" part of the format? >>>> I've seen that pattern in a couple different projects. Does anyone know >>>> about its origin? Was it something we converted to our policy names because >>>> of API names/paths? >>>> > >>>> >> >>>> >> >>>> >> I selected it as it uses the service-type[1], references the API >>>> >> resource, and then the method. So it maps well to the API >>>> reference[2] >>>> >> for the service. 
>>>> >> >>>> >> [0] >>>> https://docs.openstack.org/octavia/latest/configuration/policy.html >>>> >> [1] https://service-types.openstack.org/ >>>> >> [2] >>>> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer >>>> >> >>>> >> Michael >>>> >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: >>>> >> > >>>> >> > So +1 >>>> >> > >>>> >> > >>>> >> > >>>> >> > Tim >>>> >> > >>>> >> > >>>> >> > >>>> >> > From: Lance Bragstad >>>> >> > Reply-To: "OpenStack Development Mailing List (not for usage >>>> questions)" >>>> >> > Date: Wednesday, 12 September 2018 at 20:43 >>>> >> > To: "OpenStack Development Mailing List (not for usage questions)" >>>> , OpenStack Operators < >>>> openstack-operators at lists.openstack.org> >>>> >> > Subject: [openstack-dev] [all] Consistent policy names >>>> >> > >>>> >> > >>>> >> > >>>> >> > The topic of having consistent policy names has popped up a few >>>> times this week. Ultimately, if we are to move forward with this, we'll >>>> need a convention. To help with that a little bit I started an etherpad [0] >>>> that includes links to policy references, basic conventions *within* that >>>> service, and some examples of each. I got through quite a few projects this >>>> morning, but there are still a couple left. >>>> >> > >>>> >> > >>>> >> > >>>> >> > The idea is to look at what we do today and see what conventions >>>> we can come up with to move towards, which should also help us determine >>>> how much each convention is going to impact services (e.g. picking a >>>> convention that will cause 70% of services to rename policies). >>>> >> > >>>> >> > >>>> >> > >>>> >> > Please have a look and we can discuss conventions in this thread. >>>> If we come to agreement, I'll start working on some documentation in >>>> oslo.policy so that it's somewhat official because starting to renaming >>>> policies. 
>>>> >> > >>>> >> > >>>> >> > >>>> >> > [0] https://etherpad.openstack.org/p/consistent-policy-names >>>> >> > >>>> >> > _______________________________________________ >>>> >> > OpenStack-operators mailing list >>>> >> > OpenStack-operators at lists.openstack.org >>>> >> > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>> >> >>>> >> >>>> __________________________________________________________________________ >>>> >> OpenStack Development Mailing List (not for usage questions) >>>> >> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> > >>>> > _______________________________________________ >>>> > OpenStack-operators mailing list >>>> > OpenStack-operators at lists.openstack.org >>>> > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From duc.openstack at gmail.com Wed Sep 19 18:14:23 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Wed, 19 Sep 2018 11:14:23 -0700 Subject: [openstack-dev] [senlin] Senlin Monthly(ish) Newsletter Sep 2018 Message-ID: HTML: https://dkt26111.wordpress.com/2018/09/19/senlin-monthlyish-newsletter-september-2018/ This is the inaugural Senlin monthly(ish) newsletter. 
The goal of the newsletter is to highlight happenings in the Senlin project. If you have any feedback or questions regarding the contents, please feel free to reach out to me or anyone in the #senlin IRC channel.

News
----
* Senlin weekly meeting time was changed at the beginning of the Stein cycle to 5:30 UTC every Friday. Feel free to drop in.
* Two new core members were added to the Senlin project. Welcome jucross and eandersson.
* One new stable reviewer was added for Senlin stable maintenance. Welcome chenyb4.
* Autoscaling forum is being proposed for the Berlin Summit (http://lists.openstack.org/pipermail/openstack-dev/2018-September/134770.html). Add your comments/feedback to this etherpad: https://etherpad.openstack.org/p/autoscaling-integration-and-feedback

Blueprint Status
----------------
* Fail fast locked resource - https://blueprints.launchpad.net/senlin/+spec/fail-fast-locked-resource - Spec was approved and implementation is WIP.
* Multiple detection modes - https://blueprints.launchpad.net/senlin/+spec/multiple-detection-modes - Spec approval is pending (https://review.openstack.org/#/c/601471/).
* Fail-fast on cooldown for scaling operations - Waiting for blueprint/spec submission.
* OpenStackSDK support senlin function test - Waiting for blueprint submission.
* Senlin add support use limit return - Waiting for blueprint submission.
* Add zun driver in senlin, use zun manager container - Waiting for blueprint submission.

Community Goal Status
---------------------
* Python 3 - All patches by Python 3 goal champions for zuul migration, documentation and unit test changes have been merged.
* Upgrade Checkers - No work has started on this. If you'd like to help out with this task, please let me know. 
Recently Merged Changes
-----------------------

* Bug #1777774 was fixed (https://review.openstack.org/#/c/594643/)
* Improvements to node poll URL mode in the health policy (https://review.openstack.org/#/c/588674/)

From gkotton at vmware.com  Wed Sep 19 18:19:44 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Wed, 19 Sep 2018 18:19:44 +0000
Subject: [openstack-dev] [neutron] Core status
Message-ID: 

Hi,

I have recently transitioned to a new role where I will be working on other parts of OpenStack. Sadly I do not have the necessary cycles to maintain my core responsibilities in the neutron community. Nonetheless I will continue to be involved.

Thanks
Gary
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com  Wed Sep 19 18:25:52 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 19 Sep 2018 12:25:52 -0600
Subject: [openstack-dev] [User-committee] [tc] Joint UC/TC Meeting
In-Reply-To: 
References: <1537311496-sup-7778@lrrr.local>
Message-ID: <1537381338-sup-300@lrrr.local>

Excerpts from Chris Dent's message of 2018-09-19 12:31:26 +0100: > On Tue, 18 Sep 2018, Doug Hellmann wrote: > > > [Redirecting this from the openstack-tc list to the -dev list.] > > Excerpts from Melvin Hillsman's message of 2018-09-18 17:43:57 -0500: > >> UC is proposing a joint UC/TC meeting at the end of the month, say starting > >> after Berlin, to work more closely together. The last Monday of the month at > >> 1pm US Central time is the current proposal; throwing it out here now for > >> feedback/discussion, so that would make the first one Monday, November > >> 26th, 2018. > > I agree that the UC and TC should work more closely together. If the > best way to do that is to have a meeting then great, let's do it. > Were you thinking IRC or something else? > > But we probably need to resolve our ambivalence towards meetings.
On > Sunday at the PTG we discussed maybe going back to having a TC > meeting but didn't really decide (at least as far as I recall) and

My notes say that the chair needs to raise this after the election is over, so that the newly elected TC members can have input into the decision.

> didn't discuss in too much depth the reasons why we killed meetings > in the first place. How would this meeting be different?

I definitely see the usefulness of more regular communication between the two groups. As Chris points out, we've been trying to avoid requiring formal synchronous meetings as much as possible, so I'd like to start by listing some of the topics we might discuss. Not all of them will need to wait for a meeting, and some may be better suited to an in-person meeting at the forum in Berlin. If we come up with a list of topics that do make sense for an online discussion, we can work out how best to handle that.

Doug

From German.Eichberger at rackspace.com  Wed Sep 19 18:29:44 2018
From: German.Eichberger at rackspace.com (German Eichberger)
Date: Wed, 19 Sep 2018 18:29:44 +0000
Subject: [openstack-dev] [Openstack-operators][neutron][fwaas] Removing FWaaS V1 in Stein
Message-ID: 

All,

With the Stein release we will remove support for FWaaS V1 [1]. It has been marked deprecated since Liberty (2015) and was an experimental API. It is being replaced with FWaaS V2 [2], which has been available since the Newton release.

What is Neutron FWaaS?
Firewall-as-a-Service is a neutron project which provides router (L3) and port (L2) firewalls to protect networks and VMs. [3]

What is Neutron FWaaS V1?
FWaaS V1 was the first implementation of Firewall-as-a-Service and focused on the router port. This implementation has been ported to FWaaS V2.

What is FWaaS V2?
FWaaS V2 extends Firewall-as-a-Service to any neutron port - thus offering the same functionality as Security Groups but with a richer API (e.g. deny/reject traffic).

Why is FWaaS V1 being removed?
FWaaS V1 has been deprecated since 2015 and with FWaaS V2 being released for several cycles it is time to remove FWaaS V1. How do I migrate? Existing firewall policies and rules need to be recreated with FWaaS V2. At this point we don’t offer an automated migration tool. [1] https://developer.openstack.org/api-ref/network/v2/#fwaas-v1-0-deprecated-fw-firewalls-firewall-policies-firewall-rules [2] https://developer.openstack.org/api-ref/network/v2/#fwaas-v2-0-current-fwaas-firewall-groups-firewall-policies-firewall-rules [3] https://www.youtube.com/watch?v=9Wkym4BeM4M -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkovar at redhat.com Wed Sep 19 18:50:22 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 19 Sep 2018 11:50:22 -0700 Subject: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core Message-ID: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> Hi all, Based on our PTG discussion, I'd like to nominate Ian Y. Choi for membership in the openstack-doc-core team. I think Ian doesn't need an introduction, he's been around for a while, recently being deeply involved in infra work to get us robust support for project team docs translation and PDF builds. Having Ian on the core team will also strengthen our integration with the i18n community. Please let the ML know should you have any objections. Thanks, pk From aj at suse.com Wed Sep 19 18:54:38 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 19 Sep 2018 20:54:38 +0200 Subject: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core In-Reply-To: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> Message-ID: <4f413d36-463e-477a-9886-79bf55df677c@suse.com> On 2018-09-19 20:50, Petr Kovar wrote: > Hi all, > > Based on our PTG discussion, I'd like to nominate Ian Y. Choi for > membership in the openstack-doc-core team. 
I think Ian doesn't need an > introduction, he's been around for a while, recently being deeply involved > in infra work to get us robust support for project team docs translation and > PDF builds. > > Having Ian on the core team will also strengthen our integration with > the i18n community. > > Please let the ML know should you have any objections.

The opposite ;), heartily agree with adding him,
Andreas
-- 
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From cboylan at sapwetik.org  Wed Sep 19 19:11:38 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 19 Sep 2018 12:11:38 -0700
Subject: [openstack-dev] [all] Zuul job backlog
Message-ID: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com>

Hello everyone,

You may have noticed there is a large Zuul job backlog and changes are not getting CI reports as quickly as you might expect. There are several factors interacting with each other to make this the case. The short version is that one of our clouds is performing upgrades and has been removed from service, and we have a large number of gate failures which cause things to reset and start over. We have fewer resources than normal and are using them inefficiently. Zuul is operating as expected. Continue reading if you'd like to understand the technical details and find out how you can help make this better.

Zuul gates related projects in shared queues. Changes enter these queues and are ordered in a speculative future state that Zuul assumes will pass because multiple humans have reviewed the changes and said they are good (also they had to pass check testing first). Problems arise when tests fail, forcing Zuul to evict changes from the speculative future state, build a new state, then start jobs over again for this new future.
Typically this doesn't happen often and we merge many changes at a time, quickly pushing code into our repos. Unfortunately, the results are painful when we fail often, as we end up rebuilding future states and restarting jobs repeatedly.

Currently we have the gate and release jobs set to the highest priority as well, so they run jobs before other queues. This means the gate can starve other work if it is flaky. We've configured things this way because the gate is not supposed to be flaky, since we've reviewed things and already passed check testing.

One of the tools we have in place to make this less painful is that each gate queue operates on a window that grows and shrinks, similar to TCP slow start. As changes merge we increase the size of the window, and when they fail to merge we decrease it. This reduces the size of the future state that must be rebuilt and retested on failure when things are persistently flaky.

The best way to make this better is to fix the bugs in our software, whether that is in the CI system itself or the software being tested. The first step in doing that is to identify and track the bugs that we are dealing with. We have a tool called elastic-recheck that does this using indexed logs from the jobs. The idea there is to go through the list of unclassified failures [0] and fingerprint them so that we can track them [1]. With that data available we can then prioritize fixing the bugs that have the biggest impact. Unfortunately, right now our classification rate is very poor (only 15%), which makes it difficult to know what exactly is causing these failures. Mriedem and I have quickly scanned the unclassified list, and it appears there is a db migration testing issue causing these tests to time out across several projects. Mriedem is working to get this classified and tracked, which should help, but we will also need to fix the bug.
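The grow-and-shrink window behavior described above can be sketched roughly as follows. This is only an illustrative AIMD-style (additive-increase, multiplicative-decrease) model with made-up parameters, not Zuul's actual implementation:

```python
def adjust_window(window, change_merged, floor=1, ceiling=20):
    """Grow the active-changes window on success, shrink it on failure.

    Illustrative sketch only: Zuul's real algorithm and parameters differ.
    """
    if change_merged:
        # Success: widen the window so more queued changes run in parallel.
        return min(window + 1, ceiling)
    # Failure: halve the window so less speculative state has to be
    # rebuilt and retested the next time the queue resets.
    return max(window // 2, floor)


# Simulate a short run of merge outcomes for a gate queue.
window = 8
for merged in [True, True, False, True]:
    window = adjust_window(window, merged)
print(window)  # prints 6 (8 -> 9 -> 10 -> 5 -> 6)
```

The multiplicative decrease is the key property: a persistently flaky gate quickly collapses to a small window, limiting wasted job restarts, while a healthy gate only slowly earns its parallelism back.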
On top of that it appears that Glance has flaky functional tests (both python2 and python3) which are causing resets and should be looked into. If you'd like to help, let mriedem or myself know and we'll gladly work with you to get elasticsearch queries added to elastic-recheck. We are likely less help when it comes to fixing functional tests in Glance, but I'm happy to point people in the right direction for that as much as I can. If you can take a few minutes to do this before/after you issue a recheck it does help quite a bit. One general thing I've found would be helpful is if projects can clean up the deprecation warnings in their log outputs. The persistent "WARNING you used the old name for a thing" messages make the logs large and much harder to read to find the actual failures. As a final note this is largely targeted at the OpenStack Integrated gate (Nova, Glance, Cinder, Keystone, Swift, Neutron) since that appears to be particularly flaky at the moment. The Zuul behavior applies to other gate pipelines (OSA, Tripleo, Airship, etc) as does elastic-recheck and related tooling. If you find your particular pipeline is flaky I'm more than happy to help in that context as well. [0] http://status.openstack.org/elastic-recheck/data/integrated_gate.html [1] http://status.openstack.org/elastic-recheck/gate.html Thank you, Clark From jungleboyj at gmail.com Wed Sep 19 19:24:49 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 19 Sep 2018 14:24:49 -0500 Subject: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core In-Reply-To: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> Message-ID: <83015120-6adb-e11b-3d40-d2dd57b773c8@gmail.com> On 9/19/2018 1:50 PM, Petr Kovar wrote: > Hi all, > > Based on our PTG discussion, I'd like to nominate Ian Y. Choi for > membership in the openstack-doc-core team. 
I think Ian doesn't need an > introduction, he's been around for a while, recently being deeply involved > in infra work to get us robust support for project team docs translation and > PDF builds. > > Having Ian on the core team will also strengthen our integration with > the i18n community. > > Please let the ML know should you have any objections. Petr, Not a doc Core but wanted to add my support.  Agree he would be a great addition.  Appreciate all he does for i18n, docs and OpenStack! Jay > Thanks, > pk > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From nate.johnston at redhat.com Wed Sep 19 19:31:51 2018 From: nate.johnston at redhat.com (Nate Johnston) Date: Wed, 19 Sep 2018 15:31:51 -0400 Subject: [openstack-dev] [neutron] Core status In-Reply-To: References: Message-ID: <20180919193151.oq6rdivmuzue4ghu@bishop> On Wed, Sep 19, 2018 at 06:19:44PM +0000, Gary Kotton wrote: > I have recently transitioned to a new role where I will be working on other parts of OpenStack. Sadly I do not have the necessary cycles to maintain my core responsibilities in the neutron community. Nonetheless I will continue to be involved. Thanks for everything you've done over the years, Gary. I know I learned a lot from your reviews back when I was a wee baby Neutron developer. Best of luck on what's next! Nate From johnsomor at gmail.com Wed Sep 19 19:44:54 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 19 Sep 2018 12:44:54 -0700 Subject: [openstack-dev] [docs] Nominating Ian Y. 
Choi for openstack-doc-core In-Reply-To: <83015120-6adb-e11b-3d40-d2dd57b773c8@gmail.com> References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> <83015120-6adb-e11b-3d40-d2dd57b773c8@gmail.com> Message-ID: Also not a docs core, but fully support this nomination! Michael On Wed, Sep 19, 2018 at 12:25 PM Jay S Bryant wrote: > > > > On 9/19/2018 1:50 PM, Petr Kovar wrote: > > Hi all, > > > > Based on our PTG discussion, I'd like to nominate Ian Y. Choi for > > membership in the openstack-doc-core team. I think Ian doesn't need an > > introduction, he's been around for a while, recently being deeply involved > > in infra work to get us robust support for project team docs translation and > > PDF builds. > > > > Having Ian on the core team will also strengthen our integration with > > the i18n community. > > > > Please let the ML know should you have any objections. > Petr, > > Not a doc Core but wanted to add my support. Agree he would be a great > addition. Appreciate all he does for i18n, docs and OpenStack! 
> > Jay > > > Thanks, > > pk > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Wed Sep 19 19:45:51 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 19 Sep 2018 14:45:51 -0500 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> Message-ID: On 9/19/2018 2:11 PM, Clark Boylan wrote: > Unfortunately, right now our classification rate is very poor (only 15%), which makes it difficult to know what exactly is causing these failures. Mriedem and I have quickly scanned the unclassified list, and it appears there is a db migration testing issue causing these tests to timeout across several projects. Mriedem is working to get this classified and tracked which should help, but we will also need to fix the bug. On top of that it appears that Glance has flaky functional tests (both python2 and python3) which are causing resets and should be looked into. > > If you'd like to help, let mriedem or myself know and we'll gladly work with you to get elasticsearch queries added to elastic-recheck. We are likely less help when it comes to fixing functional tests in Glance, but I'm happy to point people in the right direction for that as much as I can. If you can take a few minutes to do this before/after you issue a recheck it does help quite a bit. 
Things have gotten bad enough that I've started proposing changes to skip particularly high failure rate tests that are not otherwise getting attention to help triage and fix the bugs. For example: https://review.openstack.org/#/c/602649/ https://review.openstack.org/#/c/602656/ Generally this is a last resort since it means we're losing test coverage, but when we hit a critical mass of random failures it becomes extremely difficult to merge code. Another one we need to make a decision on is: https://bugs.launchpad.net/tempest/+bug/1783405 Which I'm suggesting we need to mark more slow tests with the actual "slow" tag in Tempest so they move to only be run in the tempest-slow job. gmann and I talked about this last week over IRC but I forgot to update the bug report with details. I think rather than increase the timeout of the tempest-full job we should be marking more slow tests as slow. Increasing timeouts gives some short-term relief but eventually we just have to look at these issues again, and a tempest run shouldn't take over 2 hours (remember when it used to take ~45 minutes?). -- Thanks, Matt From doug at doughellmann.com Wed Sep 19 19:50:38 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 19 Sep 2018 13:50:38 -0600 Subject: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core In-Reply-To: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> Message-ID: <1537386625-sup-9613@lrrr.local> Excerpts from Petr Kovar's message of 2018-09-19 11:50:22 -0700: > Hi all, > > Based on our PTG discussion, I'd like to nominate Ian Y. Choi for > membership in the openstack-doc-core team. I think Ian doesn't need an > introduction, he's been around for a while, recently being deeply involved > in infra work to get us robust support for project team docs translation and > PDF builds. > > Having Ian on the core team will also strengthen our integration with > the i18n community. 
> > Please let the ML know should you have any objections. > > Thanks, > pk

+1

From mriedemos at gmail.com  Wed Sep 19 20:06:28 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 19 Sep 2018 15:06:28 -0500
Subject: [openstack-dev] [all] Zuul job backlog
In-Reply-To: 
References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com>
Message-ID: <19e5084c-3cdd-e21f-b1f7-bf442fba0026@gmail.com>

On 9/19/2018 2:45 PM, Matt Riedemann wrote: > Another one we need to make a decision on is: > > https://bugs.launchpad.net/tempest/+bug/1783405 > > Which I'm suggesting we need to mark more slow tests with the actual > "slow" tag in Tempest so they move to only be run in the tempest-slow > job. gmann and I talked about this last week over IRC but I forgot to > update the bug report with details. I think rather than increase the > timeout of the tempest-full job we should be marking more slow tests as > slow. Increasing timeouts gives some short-term relief but eventually we > just have to look at these issues again, and a tempest run shouldn't > take over 2 hours (remember when it used to take ~45 minutes?).

https://review.openstack.org/#/c/603900/

-- 
Thanks,
Matt

From zhipengh512 at gmail.com  Wed Sep 19 23:15:51 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Thu, 20 Sep 2018 07:15:51 +0800
Subject: [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal
In-Reply-To: 
References: <55c81a61-5670-c1c3-20f1-aa4d8153c8a2@redhat.com> <08783f7e-8201-e6ad-993b-dbb0193de536@redhat.com>
Message-ID: 

A quick sidenote for anyone using riot.im who is not super familiar with it, like I was ... Its global search cannot find all the openstack channels, so you have to join them manually via the bridge's own commands, as I recently discovered:

1. add @appservice-irc:matrix.org as a friend
2. type in the console: !join chat.freenode.net #openstack-xxx

For registration, it is more wacky, but I found it on Google anyway:

1.
add @appservice-irc:matrix.org as a friend
2. type in the console: !storepass chat.freenode.net PASSWORD
3. add NickServ (IRC) as a friend
4. type in the console: identify NICK PASSWORD

voilà ....

On Thu, Sep 20, 2018 at 1:34 AM Melvin Hillsman wrote: > Regarding some web clients that are potentially useful > > https://webchat.freenode.net/ > - Zane mentioned this already and I can say I tried/used it some time > ago until I opted for CLI/alternatives > https://riot.im (iOS and Android apps available along with online client) > - i find it a bit sluggish at times, others have not, either way it is a > decent alternative > https://thelounge.chat/ > - have not tried it yet but looks promising especially self-hosted option > https://irccloud.com > - what I currently use, I do believe it can be blocked, i am looking > into riot and thelounge tbh > > > On Wed, Sep 19, 2018 at 12:18 PM Zane Bitter wrote: >> On 18/09/18 9:10 PM, Jaesuk Ahn wrote: >> > On Wed, Sep 19, 2018 at 5:30 AM Zane Bitter > > > wrote: >> >> Restoring the whole quote here because I accidentally sent the original >> to the -sigs list only and not the -dev list. >> >> >> As others have mentioned, I think this is diving into solutions when >> we haven't defined the problems. I know you mentioned it briefly in the PTG >> session, but that context never made it to the review or the mailing list. >> >> >> >> So AIUI the issue you're trying to solve here is that the TC members >> seem distant and inaccessible to Chinese contributors because we're not on >> the same social networks they are? >> >> >> >> Perhaps there are others too? >> >> >> >> Obvious questions to ask from there would be: >> >> >> >> - Whether this is the most important issue facing contributors from >> the APAC region >> >> >> >> - To what extent the proposed solution is expected to help >> > >> > >> > I do agree with Zane on the above point. >> >> For the record, I didn't express an opinion.
I'm just pointing out what >> the questions are. >> >> > As one of OpenStack participants from Asia region, I will put my >> > personal opinion. >> > IRC and ML has been an unified and standard way of communication in >> > OpenStack Community, and that has been a good way to encourage "open >> > communication" on a unified method wherever you are from, or whatever >> > background you have. If the whole community start recognize some other >> > tools (say WeChat) as recommended alternative communication method >> > because there are many people there, ironically, it might be a way to >> > break "diversity" and "openness" we want to embrace. >> > >> > Using whatever social media (or tools) in a specific region due to any >> > reason is not a problem. Anyone is free to use anything. Only thing we >> > need to make sure is, if you want to communicate officially with the >> > whole community, there is a very well defined and unified way to do it. >> > This is currently IRC and ML. Some of Korean dev has difficulties to >> use >> > IRC. However, there is not a perfect tool out there in this world, and >> > we accept all the reason why the community selected IRC as official tool >> > >> > But, that being said, There are some things I am facing with IRC from >> > here in Korea >> > >> > As a person from Asia, I do have some of pain points. Because of time >> > differences, often, I have to do achieve searching since most of >> > conversations happened while I am sleeping. IRC is not a good tool to >> > perform "search backlog". Although there is message archive you can >> dig, >> > it is still hard. This is a problem. I do love to see any technical >> > solution for me to efficiently and easily go through irc backlog, like >> > most of modern chat tools. >> > >> > Secondly, IRC is not a popular one even in dev community here in Korea. >> > In addition, in order to properly use irc, you need to do extra work, >> > something like setting up bouncing server. 
I had to do google search to >> > figure out how to use it. >> >> I think part of the disconnect here is that people have different ideas >> about what IRC (and chat in general) is for. >> >> For me it's a way to conduct synchronous conversations. These tend to go >> badly on the mailing list (really long threads of 1 sentence per >> message) or on code review (have to keep refreshing), so it's good that >> we have another tool to do this. I answer a lot of user questions, >> clarify comments on patches, and obviously join team meetings in IRC. >> >> The key part is 'synchronous' though. If I'm not there, the conversation >> is not going to be synchronous. I don't run a bouncer, although I >> generally leave my computer running when I'm not working so you'll often >> (but not always) be able to ping me, and I'll usually look back to see >> if it was something important. Otherwise it's 50-50 whether I'll even >> bother to read scrollback, and certainly not for more than a couple of >> channels. >> >> Other people, however, have a completely different perspective: they >> want a place where they are guaranteed to be reachable at any time (even >> if they don't see it until later) and the entire record is always right >> there. I think Slack was built for those kinds of people. You would have >> to drag me kicking and screaming into Slack even if it weren't >> proprietary software. >> >> I don't know where WeChat falls on that spectrum. But maybe part of the >> issue is that we're creating too high an expectation of what it means to >> participate in the community (e.g. if you're not going to set up a >> bouncer and be reachable 24/7 then you might as well not get involved at >> all - this is 100% untrue). I've seen several assertions, including in >> the review, that any decisions must be documented on the mailing list or >> IRC, and I'm not sure I agree. IMHO, any decisions should be documented >> on the mailing list, period. 
I'd love to see more participation on the mailing list. Since it is >> asynchronous already it's somewhat friendlier to those in APAC time >> zones (although there are still issues, real or perceived, with >> decisions being reached before anyone on that side of the world has a >> chance to weigh in), and a lot easier than carrying on a conversation in >> real time for those who don't speak English natively. And while there can >> still be technical challenges with mailing lists, almost every company >> allows email through their corporate firewall. >> >> AIUI though, augmenting IRC was not the point of the proposal. Rather, I >> think it was for TC members to 'fly the flag' in WeChat to be more >> visible and available to the portion of the community that is there. >> >> > In that sense, It would be great to have >> > OpenStack community provided, simplified and well-written, written in >> > multiple language, IRC guide docs. Alternatively, if OpenStack >> community >> > can provide a good web-based irc client tool, that would be fantastic. >> >> I haven't tried it but: https://webchat.freenode.net/ >> >> > As I described the above, we can certainly have a healthy discussion on >> > what different and real problems we are facing from Asia. >> > However, I don't think this TC resolution is a good way to do that.
>> > >> > Cheers, >> > -- >> > >> > Jaesuk Ahn, Team Lead >> > Virtualization SW Lab, SW R&D Center >> > >> > SK Telecom >> > >> > >> > _______________________________________________ >> > openstack-sigs mailing list >> > openstack-sigs at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> > >> >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkovar at redhat.com Wed Sep 19 23:37:03 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 19 Sep 2018 16:37:03 -0700 Subject: [openstack-dev] [docs][i18n][ptg] Stein PTG Summary Message-ID: <20180919163703.32ec748555e59bd2984a542e@redhat.com> Hi all, Just wanted to share a summary of docs- and i18n-related meetings and discussions we had in Denver last week during the Stein Project Teams Gathering. 
The overall schedule for all our sessions with additional comments and meeting minutes can be found here: https://etherpad.openstack.org/p/docs-i18n-ptg-stein Our obligatory team picture (with quite a few members missing) can be found here (courtesy of Foundation folks): https://pmkovar.fedorapeople.org/DSC_4422.JPG To summarize what I found most important: OPS DOCS We met with the Ops community to discuss the future of Ops docs. The plan is for the Ops group to take ownership of the operations-guide (done), ha-guide (in progress), and the arch-design guide (to do). These three documents are being moved from openstack-manuals to their own repos, owned by the newly formed Operations Documentation SIG. See also https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018-operations-guide for more notes. DOCS SITE & DESIGN We discussed improving the site navigation, guide summaries (particularly install-guide), adding a new index page for project team contrib guides, and more. We met with the Foundation staff to discuss the possibility of getting assistance with site design work. We are also looking into accepting contributions from the Strategic Focus Areas folks to make parts of the docs toolchain like openstackdocstheme more easily reusable outside of the official OpenStack infrastructure. We got feedback on front page template for project team docs, with Ironic being the pilot for us. We got input on restructuring and reworking specs site to make it easier for users to understand that specs are not feature descriptions nor project docs, and to make it more consistent in how the project teams publish their specs. This will need to be further discussed with the folks owning the specs site infra. Support status badges showing at the top of docs.o.o pages may not work well for projects following the cycle-with-intermediary release model, such as Swift. We need to rethink how we configure and present the badges. 
There are also some UX bugs present for the badges (https://bugs.launchpad.net/openstack-doc-tools/+bug/1788389).

TRANSLATIONS

We met with the infra team to discuss progress on translating project team docs and, related to that, PDF builds. With the Foundation staff, we discussed translating Edge and Container whitepapers and related material.

REFERENCE, REST API DOCS AND RELEASE NOTES

With the QA team, we discussed the scope and purpose of the /doc/source/reference documentation area in project docs. Because the scope of /reference might be unclear and used inconsistently by project teams, the suggestion is to continue with the original migration plan and migrate REST API and possibly Release Notes under /doc/source, as described in https://docs.openstack.org/doc-contrib-guide/project-guides.html.

CONTRIBUTOR GUIDE

The OpenStack Contributor Guide was discussed in a separate session, see https://etherpad.openstack.org/p/FC_SIG_ptg_stein for notes.

THAT'S IT?

Please add to the list if I missed anything important, particularly for i18n.

Thank you to everybody who attended the sessions, and a special thanks goes to all the PTG organizers!

Cheers,
pk

From yuxcer at gmail.com  Wed Sep 19 23:53:29 2018
From: yuxcer at gmail.com (Xingchao)
Date: Thu, 20 Sep 2018 11:53:29 +1200
Subject: [openstack-dev] [Openstack-operators][horizon] Dashboard memory leaks
Message-ID: 

Hi All,

Recently, we found that the server which hosts the horizon dashboard ran out of memory (OOM) several times, caused by the horizon services. After restarting the dashboard, the memory usage goes up very quickly if we access the /project/network_topology/ path.

*How to reproduce*

Log in to the dashboard and go to the 'Network Topology' tab, then leave it there (autorefresh 10s by default) and monitor the memory changes on the host.
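As context for the analysis below: horizon/utils/memoized.py caches function results keyed by weak references to the arguments where it can. A minimal sketch of that general pattern (a simplified illustration with assumed names, not Horizon's actual code) shows why entries keyed by objects that cannot be weakly referenced are never evicted:

```python
import weakref


def memoized(func):
    """Cache results, keying on weak references to arguments where possible.

    Simplified illustration of the weakref-cache pattern; NOT the actual
    Horizon implementation.
    """
    cache = {}

    def wrapper(*args):
        key = []
        for arg in args:
            try:
                # Arguments that support weak references do not, by
                # themselves, keep the cache entry's key alive.
                key.append(weakref.ref(arg))
            except TypeError:
                # Built-ins such as str and int cannot be weakly
                # referenced, so the cache key holds a strong reference:
                # these entries can never be reclaimed by the GC.
                key.append(arg)
        key = tuple(key)
        if key not in cache:
            cache[key] = func(*args)
        return cache[key]

    wrapper.cache = cache  # exposed only for inspection in this demo
    return wrapper


@memoized
def double(x):
    return 2 * x

double(21)
double(21)  # second call is served from the cache
print(len(double.cache))  # prints 1 -- the int-keyed entry stays forever
```

This matches the report's finding below: the leaked referrers are mostly plain dicts, strings, and lists — exactly the kinds of objects a weakref-based cache cannot evict automatically.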
*Versions and Components*

Dashboard: stable/pike
Server: uWSGI 1.9.17-1
OS: Ubuntu 14.04 trusty
Python: 2.7.6

As the memoized code has changed little since Pike, you should also be able to reproduce this on the Queens/Rocky releases.

*The investigation*

The root cause of the memory leak is the decorator memoized (horizon/utils/memoized.py), which is used to cache function calls in Horizon. After disabling it, the memory growth was brought under control.

The following is a comparison of the memory change (measured with guppy) for each request to /project/network_topology:

- original (no code change): 684kb
- with garbage collection run manually: 185kb
- with the memoized cache disabled: 10kb

As we know, memoized uses weakrefs to cache objects. A weak reference to an object is not enough to keep the object alive: when the only remaining references to a referent are weak references, garbage collection is free to destroy the referent and reuse its memory for something else.

In memory we can see lots of weakref-related objects, for example:

Partition of a set of 394 objects. Total size = 37824 bytes.
 Index  Count   %     Size   %  Cumulative   %  Kind (class / dict of class)
     0    197  50    18912  50       18912  50  _cffi_backend.CDataGCP
     1    197  50    18912  50       37824 100  weakref.KeyedRef

But the rest of the objects are not weakrefs. The following is the change in memory objects per /project/network_topology access, with garbage collection run manually:

Partition of a set of 1017 objects. Total size = 183680 bytes.
 Index  Count   %     Size   %  Cumulative   %  Referrers by Kind (class / dict of class)
     0    419  41    58320  32       58320  32  dict (no owner)
     1    100  10    23416  13       81736  44  list
     2    135  13    15184   8       96920  53
     3      2   0     6704   4      103624  56  urllib3.connection.VerifiedHTTPSConnection
     4      2   0     6704   4      110328  60  urllib3.connectionpool.HTTPSConnectionPool
     5      1   0     3352   2      113680  62  novaclient.v2.client.Client
     6      2   0     2096   1      115776  63  OpenSSL.SSL.Connection
     7      2   0     2096   1      117872  64  OpenSSL.SSL.Context
     8      2   0     2096   1      119968  65  Queue.LifoQueue
     9     12   1     2096   1      122064  66  dict of urllib3.connectionpool.HTTPSConnectionPool

Most of them are dicts. The following are the dicts sorted by class; as you can see, most of them are not weakref objects:

Partition of a set of 419 objects. Total size = 58320 bytes.
 Index  Count   %     Size   %  Cumulative   %  Class
     0    362  86    50712  87       50712  87  unicode
     1     27   6     3736   6       54448  93  list
     2      5   1     2168   4       56616  97  dict
     3     22   5     1448   2       58064 100  str
     4      2   0      192   0       58256 100  weakref.KeyedRef
     5      1   0       64   0       58320 100  keystoneauth1.discover.Discover

*The issue*

So the problem is that memoized does not work the way we expect: it allocates memory to cache objects, but some of them can never be released.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jungleboyj at gmail.com Thu Sep 20 01:43:24 2018
From: jungleboyj at gmail.com (Jay S Bryant)
Date: Wed, 19 Sep 2018 20:43:24 -0500
Subject: [openstack-dev] [cinder] Proposed Changes to the Core Team ...
Message-ID: <44a417c5-118e-f0ee-61a5-3eae398e64bb@gmail.com>

All,

In the last year we have had some changes to Core team participation. This was a topic of discussion at the PTG in Denver last week. Based on that discussion I have reached out to John Griffith and Winston D (Huang Zhiteng) and asked if they felt they could continue to be a part of the Core Team. Both agreed that it was time to relinquish their titles.

So, I am proposing to remove John Griffith and Winston D from Cinder Core.
If I hear no concerns with this plan in the next week I will remove them.

It is hard to remove people who have been so instrumental in the early days of Cinder. Your past contributions are greatly appreciated and the team would be happy to have you back if circumstances ever change.

Sincerely,
Jay Bryant (jungleboyj)

From jimmy at openstack.org Thu Sep 20 02:04:45 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Wed, 19 Sep 2018 21:04:45 -0500
Subject: [openstack-dev] Fwd: Denver Ops Meetup post-mortem
In-Reply-To: 
References: 
Message-ID: <5BA3003D.7020405@openstack.org>

Thanks for the thorough write-up as well as the detailed feedback. I'm including some of my notes from the Ops Meetup Feedback session just a bit below, as well as some comments inline.

One of the critical things that would help both the Ops and Dev communities is to have a holistic sense of what the Ops Meetup goals are.

* Were the goals well defined ahead of the event?
* Were they achieved, and/or how can the larger OpenStack community help achieve them?

From our discussion at the Feedback session, this isn't something that has been tracked in the past. Having actionable, measurable goals coming out of the Ops Meetup could go a long way towards helping the projects realize them. Per our discussion, being able to present this list to the User Committee would be a good step forward for each event.

I wasn't able to attend the entire time, but a couple of interesting notes:

* The knowledge of deployment tools seemed pretty fragmented, and it seemed like there was a desire for clearer and more comprehensive documentation comparing the different deployment options, as well as documentation about how to get started with a POC.

* Bare Metal in the Datacenter: It was clear that we need more Ironic 101 content and education, including how to get started, system requirements, etc.
We can dig up presentations from previous Summits; we also talked to TheJulia about potentially hosting a community meeting or producing another video leading up to the Berlin Summit.

* Here are the notes from the sessions in case anyone on the ops list is interested: https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018

It looks like there were some action items documented at the bottom of this etherpad: https://etherpad.openstack.org/p/ops-denver-2018-further-work

Ops Meetup Feedback

Takeaways from the Feedback Session not covered below (mostly from https://etherpad.openstack.org/p/uc-stein-ptg)

Chris Morgan wrote:
--SNIP --
> What went well
>
> - some of the sessions were great and a lot of progress was made
> - overall attendance in the ops room was good

We had to add 5 tables to accommodate the additional attendees. It was a great crowd!

> - more developers were able to join the discussions

Given that this is something that wouldn't happen at a normal Ops Meetup, is there a way we could help facilitate this in the future that would meet the Ops community's needs?

> - facilities were generally fine
> - some operators leveraged being at PTG to have useful involvement in
> other sessions/discussions such as Keystone, User Committee,
> Self-Healing SIG, not to mention the usual "hallway conversations",
> and similarly some project devs were able to bring pressing questions
> directly to operators.
>
> What didn't go so well:
>
> - Merging into upgrade SIG didn't go particularly well

This is a tough one b/c of the fluidity of the PTG. Agreed that one can end up missing a good chunk of the discussion. OTOH, the flexibility of the event is what allows great discussions to take place. In the future, I think better coordination w/ specific project teams + updating the PTGBot could help make sure the schedules are in sync.

> - fewer ops attended (in particular there were fewer from outside the US)

Do you have demographics on the Ops Meetups in Japan or NY?
I'm curious how those compare to what we saw in Denver, whether more promotion is needed, or whether these events just end up being more continent/region focused.

> - Some of the proposed sessions were not well vetted

Are there any suggestions on how to improve this moving forward? Perhaps a CFP-style submission process, with a small vetting group, could help this situation? My understanding was that the Tokyo event, co-located with OpenStack Days, didn't suffer from this problem.

> - some ops who did attend stated the event identity was diluted, it
> was less attractive

I'd love some more info on this. Please have these people reach out to let me know how we can fix this in the future. Even if we decide not to hold another Ops Meetup at a PTG, this is relevant to how we run events.

> - we tried to adjust the day 2 schedule to include late submissions,
> however it was probably too late in some cases
>
> I don't think it's so important to drill down into all the whys and
> wherefores of how we fell down here except to say that the ops meetups
> team is a small bunch of volunteers all with day jobs (presumably just
> like everyone else on this mailing list).

The usual, basically.

> Much more important: what will be done to improve things going forward:
>
> - The User Committee has offered to get involved with the technical
> content. In particular to bring forward topics from other relevant
> events into the ops meetup planning process, and then take output from
> ops meetups forward to subsequent events. We (ops meetup team) have
> welcomed this.

This is super critical IMO. One of the things we discussed at the Ops Meetup Feedback session (co-located w/ the UC Meeting) was to provide an actionable list of takeaways from the meetup, as well as a measurable list of how you'd like to see them fixed. From the conversation, this isn't something that has occurred before at Ops Meetups, but I think it would be a huge step forward in working towards a solution to your problems.
> - The Ops Meetups Team will endeavor to start topic selection earlier
> and have a more critical approach. Having a longer list of possible
> sessions (when starting with material from earlier events) should make
> it at least possible to devise a better agenda. Agenda quality drives
> attendance to some extent and so can ensure a virtuous circle.

Agreed 100%. For the Forum, we start about 2 months out. I think it's worth looking at that process to see if anything can be gained there. I'm very happy to assist with advice on this one...

> - We need to work out whether we're doing fixed schedule events
> (similar to previous mid-cycle Ops Meetups) or fully flexible
> PTG-style events, but grafting one onto the other ad-hoc clearly is a
> terrible idea. This needs more discussion.

+1

> - The Ops Meetups Team continues to explore strange new worlds, or at
> least get in touch with more and more OpenStack operators to find out
> what the meetups team and these events could do for them and hence
> drive the process better. One specific work item here is to help the
> (widely disparate) operator community with technical issues such as
> getting setup with the openstack git/gerrit and IRC. The latter is the
> preferred way for the community to meet, but is particularly difficult
> now with the registered nickname requirement. We will add help
> documentation on how to get over this hurdle.

The IRC issues haven't affected me, fortunately. I'd love to hear from anyone who attended, so we can share the learnings and discuss next steps, whether that means investing in documentation/education, proposing Forum sessions for the Berlin Summit, etc.
Cheers,
Jimmy

> - YOUR SUGGESTION HERE
>
> Chris
>
> --
> Chris Morgan
>
> --
> Chris Morgan
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Thu Sep 20 02:11:12 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 20 Sep 2018 11:11:12 +0900
Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"
In-Reply-To: <79ef4d5a-9816-bacb-2958-60899c021039@inaugust.com>
References: <79ef4d5a-9816-bacb-2958-60899c021039@inaugust.com>
Message-ID: <165f4bed73b.11e3442fc237712.4614262941605225022@ghanshyammann.com>

---- On Wed, 19 Sep 2018 23:29:46 +0900 Monty Taylor wrote ----
> On 09/19/2018 09:23 AM, Monty Taylor wrote:
> > On 09/19/2018 08:25 AM, Chris Dent wrote:
> >>
> >> I have a patch in progress to add some simple integration tests to
> >> placement:
> >>
> >> https://review.openstack.org/#/c/601614/
> >>
> >> They use https://github.com/cdent/gabbi-tempest . The idea is that
> >> the method for adding more tests is to simply add more yaml in
> >> gate/gabbits, without needing to worry about adding to or think
> >> about tempest.
> >>
> >> What I have at that patch works; there are two yaml files, one of
> >> which goes through the process of confirming the existence of a
> >> resource provider and inventory, booting a server, seeing a change
> >> in allocations, resizing the server, seeing a change in allocations.
> >>
> >> But this is kludgy in a variety of ways and I'm hoping to get some
> >> help or pointers to the right way. I'm posting here instead of
> >> asking in IRC as I assume other people confront these same
> >> confusions.
> >> The issues:
> >>
> >> * The associated playbooks are cargo-culted from stuff labelled
> >> "legacy" that I was able to find in nova's jobs. I get the
> >> impression that these are more verbose and duplicative than they
> >> need to be and are not aligned with modern zuul v3 coolness.
> >
> > Yes. Your life will be much better if you do not make more legacy jobs.
> > They are brittle and hard to work with.
> >
> > New jobs should either use the devstack base job, the devstack-tempest
> > base job or the devstack-tox-functional base job - depending on what
> > things are intended.

+1. All the base jobs from Tempest and Devstack (except grenade, which is in progress) are available to use as bases for legacy jobs. Using devstack-tempest in your patch is the right thing. In addition, you need to set tox_envlist to all-plugins to make tempest_test_regex work. I commented on the review.

> >
> > You might want to check out:
> >
> > https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html
> >
> > also, cmurphy has been working on updating some of keystone's legacy
> > jobs recently:
> >
> > https://review.openstack.org/602452
> >
> > which might also be a source for copying from.
> >
> >> * It takes an age for the underlying devstack to build, I can
> >> presumably save some time by installing fewer services, and making
> >> it obvious how to add more when more are required. What's the
> >> canonical way to do this? Mess with {enable,disable}_service, cook
> >> the ENABLED_SERVICES var, do something with required_projects?
> >
> > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190
> >
> > Has an example of disabling services, of adding a devstack plugin, and
> > of adding some lines to localrc.
> >
> > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117
> >
> > Has some more complex config bits in it.
> >
> > In your case, I believe you want to have parent: devstack-tempest
> > instead of parent: devstack-tox-functional
> >
> >> * This patch, and the one that follows it [1] dynamically install
> >> stuff from pypi in the post test hooks, simply because that was
> >> the quick and dirty way to get those libs in the environment.
> >> What's the clean and proper way? gabbi-tempest itself needs to be
> >> in the tempest virtualenv.
> >
> > This I don't have an answer for. I'm guessing this is something one
> > could do with a tempest plugin?
>
> K. This:
>
> http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/.zuul.yaml#n184

Yeah, you can install it via the TEMPEST_PLUGINS var. All plugins specified in the TEMPEST_PLUGINS var will be installed into the tempest venv [1]. You can specify gabbi-tempest the same way.

[1] https://github.com/openstack-dev/devstack/blob/6f4b7fc99c4029d25a924bcad968089d89e9d296/lib/tempest#L663

-gmann

> Has an example of a job using a tempest plugin.
>
> >> * The post.yaml playbook which gathers up logs seems like a common
> >> thing, so I would hope could be DRYed up a bit. What's the best
> >> way to that?
> >
> > Yup. Legacy devstack-gate based jobs are pretty terrible.
> >
> > You can delete the entire post.yaml if you move to the new devstack base
> > job.
> >
> > The base devstack job has a much better mechanism for gathering logs.
> >
> >> Thanks very much for any input.
> >>
> >> [1] perf logging of a loaded placement:
> >> https://review.openstack.org/#/c/602484/

From mtreinish at kortar.org Thu Sep 20 02:47:20 2018
From: mtreinish at kortar.org (Matthew Treinish)
Date: Wed, 19 Sep 2018 22:47:20 -0400
Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"
In-Reply-To: <165f4bed73b.11e3442fc237712.4614262941605225022@ghanshyammann.com>
References: <79ef4d5a-9816-bacb-2958-60899c021039@inaugust.com> <165f4bed73b.11e3442fc237712.4614262941605225022@ghanshyammann.com>
Message-ID: <20180920024720.GA18981@zeong>

On Thu, Sep 20, 2018 at 11:11:12AM +0900, Ghanshyam Mann wrote:
> ---- On Wed, 19 Sep 2018 23:29:46 +0900 Monty Taylor wrote ----
> > On 09/19/2018 09:23 AM, Monty Taylor wrote:
> > > On 09/19/2018 08:25 AM, Chris Dent wrote:
> > >>
> > >> I have a patch in progress to add some simple integration tests to
> > >> placement:
> > >>
> > >> https://review.openstack.org/#/c/601614/
> > >>
> > >> They use https://github.com/cdent/gabbi-tempest .
> > >> The idea is that
> > >> the method for adding more tests is to simply add more yaml in
> > >> gate/gabbits, without needing to worry about adding to or think
> > >> about tempest.
> > >>
> > >> What I have at that patch works; there are two yaml files, one of
> > >> which goes through the process of confirming the existence of a
> > >> resource provider and inventory, booting a server, seeing a change
> > >> in allocations, resizing the server, seeing a change in allocations.
> > >>
> > >> But this is kludgy in a variety of ways and I'm hoping to get some
> > >> help or pointers to the right way. I'm posting here instead of
> > >> asking in IRC as I assume other people confront these same
> > >> confusions. The issues:
> > >>
> > >> * The associated playbooks are cargo-culted from stuff labelled
> > >> "legacy" that I was able to find in nova's jobs. I get the
> > >> impression that these are more verbose and duplicative than they
> > >> need to be and are not aligned with modern zuul v3 coolness.
> > >
> > > Yes. Your life will be much better if you do not make more legacy jobs.
> > > They are brittle and hard to work with.
> > >
> > > New jobs should either use the devstack base job, the devstack-tempest
> > > base job or the devstack-tox-functional base job - depending on what
> > > things are intended.
>
> +1. All the base jobs from Tempest and Devstack (except grenade, which is in progress) are available to use as bases for legacy jobs. Using devstack-tempest in your patch is the right thing. In addition, you need to set tox_envlist to all-plugins to make tempest_test_regex work. I commented on the review.

No, all-plugins is incorrect and should never be used. It's only there for legacy support, it is deprecated, and I thought we pushed a patch indicating that (but I can't find it). It tells tox to create a venv with system site-packages enabled, and that almost always causes more problems than it fixes.
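The system-site-packages behavior described here can be seen directly with the stdlib venv module. This is a sketch of the underlying interpreter flag only, not of how tox or tempest configure their environments; the "isolated" and "shared" directory names are just for illustration:

```python
import os
import tempfile
import venv


def system_site_flag(env_dir):
    """Read include-system-site-packages from a venv's pyvenv.cfg."""
    with open(os.path.join(env_dir, "pyvenv.cfg")) as cfg:
        for line in cfg:
            key, _, value = line.partition("=")
            if key.strip() == "include-system-site-packages":
                return value.strip()
    return None


with tempfile.TemporaryDirectory() as tmp:
    isolated = os.path.join(tmp, "isolated")  # isolated venv, like a normal tox env
    shared = os.path.join(tmp, "shared")      # sees the host's global packages
    venv.EnvBuilder(system_site_packages=False).create(isolated)
    venv.EnvBuilder(system_site_packages=True).create(shared)
    flags = (system_site_flag(isolated), system_site_flag(shared))
    print(flags)  # -> ('false', 'true')
```

An interpreter launched from the "shared" env can import every package installed on the host, which is why results from such an env depend on the node's global state and are hard to reproduce.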
Specifying the plugin with TEMPEST_PLUGINS will make sure the plugin is installed in tempest's venv, and if you need to run a tox job without a preset selection regex (so you can specify your own) you should use the "all" job. (not all-plugins)

-Matt Treinish

> > >
> > > You might want to check out:
> > >
> > > https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html
> > >
> > > also, cmurphy has been working on updating some of keystone's legacy
> > > jobs recently:
> > >
> > > https://review.openstack.org/602452
> > >
> > > which might also be a source for copying from.
> > >
> > >> * It takes an age for the underlying devstack to build, I can
> > >> presumably save some time by installing fewer services, and making
> > >> it obvious how to add more when more are required. What's the
> > >> canonical way to do this? Mess with {enable,disable}_service, cook
> > >> the ENABLED_SERVICES var, do something with required_projects?
> > >
> > > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190
> > >
> > > Has an example of disabling services, of adding a devstack plugin, and
> > > of adding some lines to localrc.
> > >
> > > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117
> > >
> > > Has some more complex config bits in it.
> > >
> > > In your case, I believe you want to have parent: devstack-tempest
> > > instead of parent: devstack-tox-functional
> > >
> > >> * This patch, and the one that follows it [1] dynamically install
> > >> stuff from pypi in the post test hooks, simply because that was
> > >> the quick and dirty way to get those libs in the environment.
> > >> What's the clean and proper way? gabbi-tempest itself needs to be
> > >> in the tempest virtualenv.
> > >
> > > This I don't have an answer for. I'm guessing this is something one
> > > could do with a tempest plugin?
> >
> > K.
> > This:
> >
> > http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/.zuul.yaml#n184
>
> Yeah, you can install it via the TEMPEST_PLUGINS var. All plugins specified in the TEMPEST_PLUGINS var will be installed into the tempest venv [1]. You can specify gabbi-tempest the same way.
>
> [1] https://github.com/openstack-dev/devstack/blob/6f4b7fc99c4029d25a924bcad968089d89e9d296/lib/tempest#L663
>
> -gmann
>
> > Has an example of a job using a tempest plugin.
> >
> > >> * The post.yaml playbook which gathers up logs seems like a common
> > >> thing, so I would hope could be DRYed up a bit. What's the best
> > >> way to that?
> > >
> > > Yup. Legacy devstack-gate based jobs are pretty terrible.
> > >
> > > You can delete the entire post.yaml if you move to the new devstack base
> > > job.
> > >
> > > The base devstack job has a much better mechanism for gathering logs.
> > >
> > >> Thanks very much for any input.
> > >>
> > >> [1] perf logging of a loaded placement:
> > >> https://review.openstack.org/#/c/602484/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From chenxingcampus at outlook.com Thu Sep 20 03:24:29 2018
From: chenxingcampus at outlook.com (Chan Chason)
Date: Thu, 20 Sep 2018 03:24:29 +0000
Subject: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core
In-Reply-To: <20180919115022.825829a419ef7ac1573a76a0@redhat.com>
References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com>
Message-ID: 

+1

> On Sep 20, 2018, at 2:50 AM, Petr Kovar wrote:
>
> Hi all,
>
> Based on our PTG discussion, I'd like to nominate Ian Y. Choi for
> membership in the openstack-doc-core team. I think Ian doesn't need an
> introduction, he's been around for a while, recently being deeply involved
> in infra work to get us robust support for project team docs translation and
> PDF builds.
>
> Having Ian on the core team will also strengthen our integration with
> the i18n community.
>
> Please let the ML know should you have any objections.
>
> Thanks,
> pk
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From anteaya at anteaya.info Thu Sep 20 04:03:48 2018
From: anteaya at anteaya.info (Anita Kuno)
Date: Thu, 20 Sep 2018 00:03:48 -0400
Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [tc]Global Reachout Proposal
In-Reply-To: <20180918124049.jw7xbufikxfx3w37@yuggoth.org>
References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org>
Message-ID: <96743b2c-7d12-0769-9176-746c2d4edbbe@anteaya.info>

On 2018-09-18 08:40 AM, Jeremy Stanley wrote:
> On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote:
> [...]
>> I can understand that IRC cannot be used in China which is very
>> painful and mostly it is used weChat.
> [...]
>
> I have yet to hear anyone provide first-hand confirmation that
> access to Freenode's IRC servers is explicitly blocked by the
> mainland Chinese government. There has been a lot of speculation
> that the usual draconian corporate firewall policies (surprise, the
> rest of the World gets to struggle with those too, it's not just a
> problem in China) are blocking a variety of messaging protocols from
> workplace networks and the people who encounter this can't tell the
> difference because they're already accustomed to much of their other
> communications being blocked at the border. I too have heard from
> someone who's heard from someone that "IRC can't be used in China"
> but the concrete reasons why continue to be missing from these
> discussions.

I'll reply to this email arbitrarily in order to comply with Zhipeng Huang's wishes that the conversation concerned with understanding the actual obstacles to communication takes place on the mailing list.
I do hope I am posting to the correct thread.

In response to part of your comment on the patch at https://review.openstack.org/#/c/602697/, which you posted about 5 hours ago, you said: "@Anita you are absolutely right it is only me stuck my head out speaks itself the problem I stated in the patch. Many of the community tools that we are comfortable with are not that accessible to a broader ecosystem. And please assured that I meant I refer the patch to the Chinese community, as Leong also did on the ML, to try to bring them over to join the convo." I would like to reply.

I would like to say that I am honoured by your generosity. Thank you.

Now, when the Chinese community consumes the patch, as well as the conversation in the comments, please encourage folks to ask for clarification if any descriptions or phrases don't make sense to them. One of the best ways of ensuring clear communication is to start off slowly and take the time to ask what the other side means. It can seem tedious and a waste of time, but I have found it to be very educational and helpful in understanding how the other person perceives the situation. It also helps me to understand how I am creating obstacles in the ways that I talk. Taking time to clarify helps me adjust how I am speaking so that my meaning is more likely to be understood by the group to which I am trying to offer my perspective.

I do appreciate that many people are trying to avoid embarrassment, but I have never found any way to understand people from a culture other than the one I grew up in without embarrassing myself and working through it. Usually I find the group I want to understand is more than willing to rescue me from my embarrassment and support me in my learning. In a strange way, the embarrassment is kind of helpful in creating understanding between myself and the people I am trying to understand.
Thank you,
Anita

From ianyrchoi at gmail.com Thu Sep 20 04:09:04 2018
From: ianyrchoi at gmail.com (Ian Y. Choi)
Date: Thu, 20 Sep 2018 13:09:04 +0900
Subject: [openstack-dev] [docs][i18n][ptg] Stein PTG Summary
In-Reply-To: <20180919163703.32ec748555e59bd2984a542e@redhat.com>
References: <20180919163703.32ec748555e59bd2984a542e@redhat.com>
Message-ID: <44520f86-4807-398a-fa48-94230d2d3673@gmail.com>

Thanks a lot for the nice summary, especially on the Docs part!

I would like to add a summary with more context from the I18n perspective.

Note: I mainly participated in the Docs/I18n discussions on Monday and Tuesday only (I was not available during Wed-Fri due to conflicts with other work in my country), and my summary may differ from what the current I18n PTL would write had he participated in the Stein PTG, but I would like to summarize as I18n ex-PTL (Ocata, Pike) and as one of the active participants in the Docs/I18n discussions.

The Documentation and I18n teams have held collaborative discussions since the Pike PTG. Following the Queens and Rocky cycles, I am so happy that the Documentation and I18n teams again collaborated closely at the Stein PTG, sharing issues in horizontal discussions.

More details on the I18n issues are available in the bottom part ("i18n Topics") of: https://etherpad.openstack.org/p/docs-i18n-ptg-stein

PROJECT DOCUMENTATION TRANSLATION SUPPORT

This year, the I18n team actively started to support project documentation translation [1], and there was progress on defining documentation translation targets, generatepot infra jobs, and translation sync from project repositories to Zanata for translation sources and from Zanata back to project repositories for translated strings. [2] and [3] are parts of the work the I18n team did in the previous cycle, and the final part is how to support publication of translated documentation in alignment with the Documentation team, since the PDF support implementation is also related to how PDF files are published for project repositories.
Although there were very nice discussions during the last Vancouver Summit [4], a more generic approach on the infra side to better support translated documentation, PDF builds, and translation will be needed after some changes to the Project Testing Interface, which is used for project documentation builds [5]. [6] is a nice summary from Doug (really appreciated!) of the direction and plans for PDF and translation builds making use of the openstack-tox-docs job [7], and the I18n team would like to continue working with the Documentation and Infrastructure teams on the actual implementation during the Stein cycle.

USER SURVEY, TRANSLATING WHITEPAPERS, AND RECOGNITION OF TRANSLATORS

With nice collaboration between the Foundation and the I18n team, the I18n team started to translate the OpenStack user survey [8] (after an initial discussion at the Pike PTG), the edge computing whitepaper [9], and the container whitepaper [10] into multiple languages, with many language coordinators and translators. This translation effort is distinct from the Active Technical Contributor (ATC) recognition which translators also receive for OpenStack project translation and technical documentation translation [11]. Some translators shared that they would be interested in translating technical documents but not the OpenStack user survey and other non-technical documents. I thought this might be due to the different governance (Foundation-led documents differ from official projects under the Technical Committee), and that Active User Contributor (AUC) [12] recognition might be a good idea. Although I could not discuss the details with User Committee members during the PTG, the Foundation agreed that AUC recognition for such translators would be a good idea, and Melvin, one of the User Committee members, agreed with the idea during a very short conversation.
In my opinion, it will take some time for more discussion and agreement on detailed criteria among the User Committee and the Foundation (e.g., what word count would align well with the current AUC recognition criteria), but let's try to move forward on this :) Also, documenting the detailed steps and activities for the user survey in future years, and more on the whitepapers, will be important, so the I18n team will better document how I18n team members carry out these activities, with steps like [13]. Some translators also shared the concern that there is no I18n in the OpenStack project navigator and map, which is likewise related to recognition of what translators contribute. The Foundation explained that this might be down to the intended purpose of each artifact (e.g., the map was designed to show OpenStack components and how those components interact with each other from a technical perspective), shared that it would do its best to aggregate the scattered definitions (e.g., [14], [15], and elsewhere) for consistency, and will find a nice place for Docs, I18n, Congress, etc. on the Software/Project Navigator. TRANSLATING CONTRIBUTOR GUIDE WITH FIRST CONTACT SIG The detailed summary is in [16]. To summarize:  - I shared the I18n process with First Contact SIG members. Participants acknowledged that    setting a *translation period* will be needed, but starting with the initial translation process    is a good idea since the guide is not translated yet  - Participants think that user groups would be interested in translating the contributor guide.  - I will set up translation sync and publication for the contributor guide, and Kendall Nelson will kindly    help explain the background of adding the contributor guide as a translation target. I18N-ING STORYBOARD [17] Although the I18n team now uses Launchpad [18] for task management, issues in multiple languages are mixed together in the "bugs" section, which hampers maintenance by language coordinators.
After discussion on the I18n mailing list [19], I think it would be great if Storyboard were internationalized, so that the I18n team can migrate from Launchpad to Storyboard and more global contributors can access Storyboard in their native language. One issue with I18n-ing Storyboard is that story descriptions for most projects need to be written in English, and an internationalized Storyboard might mislead contributors into writing descriptions in other languages. Participants thought that adding a warning message on posting pages (e.g., when posting a Story) would solve this issue. Participants also shared the implementation technologies currently used in Storyboard and in other dashboard projects such as Horizon and TripleO UI. Brainstorming on, and making use of, I18n libraries in Storyboard is the action item at this point. PROCESS ON OPENSTACK-HELM DOC FROM NON-ENGLISH TO ENGLISH I participated in a brief discussion with the OpenStack-Helm team about documentation contribution from Korean to English. The current documentation and I18n process goes from English to multiple languages, but some active contributors in the OpenStack-Helm team would like to contribute Korean documentation first and then translate it from Korean into English. My suggestion is to keep the current documentation and I18n process but add additional steps before the English rst-based documentation. For example, if the OpenStack-Helm cores agree to have a *doc-ko_KR/source* folder, contributing Korean documentation is limited to that folder, Zanata helps translate from Korean to English in that folder, the translated po files can produce English rsts, and the translated rst files are added to the *doc/source* folder. This way there would be no effect on the existing documentation & I18n process, and Korean documentation contribution and translation from Korean to English would be encouraged.
The OpenStack-Helm team agreed with this approach, and I think it would be a good step toward more internationalization, first as a pilot project, and it can be extended to more projects in the future. This is my summary of the I18n issues discussed during the Stein PTG. Thanks a lot to all the members who are involved in and help Documentation and I18n. Although there was no discussion on setting translation plans and priorities, let's do that through [21] with Frank :) And finally, more pictures are available at [22] :) With many thanks, /Ian [1] https://blueprints.launchpad.net/openstack-i18n/+spec/project-doc-translation-support [2] https://review.openstack.org/#/c/545377/ [3] https://review.openstack.org/#/c/581000/ [4] https://etherpad.openstack.org/p/YVR-docs-support-pdf-and-translation [5] https://review.openstack.org/#/c/580495/ [6] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134609.html [7] https://docs.openstack.org/infra/openstack-zuul-jobs/jobs.html#job-openstack-tox-docs [8] https://www.openstack.org/user-survey [9] https://www.openstack.org/edge-computing/cloud-edge-computing-beyond-the-data-center [10] https://www.openstack.org/containers/leveraging-containers-and-openstack/ [11] https://docs.openstack.org/i18n/latest/official_translator.html#atc-status-in-i18n-project [12] https://governance.openstack.org/uc/reference/charter.html#active-user-contributors-auc [13] https://docs.openstack.org/i18n/latest/release_management.html [14] https://git.openstack.org/cgit/openstack/openstack-manuals/tree/www/project-data/rocky.yaml [15] https://github.com/ttx/openstack-map/blob/master/openstack_components.yaml [16] https://etherpad.openstack.org/p/FC_SIG_ptg_stein [17] https://etherpad.openstack.org/p/sb-stein-ptg-planning [18] https://launchpad.net/openstack-i18n [19] http://lists.openstack.org/pipermail/openstack-i18n/2018-September/003307.html [20] http://lists.openstack.org/pipermail/openstack/2018-September/046937.html [21]
http://lists.openstack.org/pipermail/openstack-i18n/2018-September/003314.html [22] https://www.dropbox.com/sh/2pmvfkstudih2wf/AAAG2c6C_OXorMRFH66AvboYa/Docs%20%2B%20i18n Petr Kovar wrote on 9/20/2018 8:37 AM: > Hi all, > > Just wanted to share a summary of docs- and i18n-related meetings > and discussions we had in Denver last week during the Stein Project > Teams Gathering. > > The overall schedule for all our sessions with additional comments and > meeting minutes can be found here: > > https://etherpad.openstack.org/p/docs-i18n-ptg-stein > > Our obligatory team picture (with quite a few members missing) can be > found here (courtesy of Foundation folks): > > https://pmkovar.fedorapeople.org/DSC_4422.JPG > > To summarize what I found most important: > > OPS DOCS > > We met with the Ops community to discuss the future of Ops docs. The plan > is for the Ops group to take ownership of the operations-guide (done), > ha-guide (in progress), and the arch-design guide (to do). > > These three documents are being moved from openstack-manuals to their own > repos, owned by the newly formed Operations Documentation SIG. > > See also > https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018-operations-guide > for more notes. > > DOCS SITE & DESIGN > > We discussed improving the site navigation, guide summaries (particularly > install-guide), adding a new index page for project team contrib guides, and > more. We met with the Foundation staff to discuss the possibility of getting > assistance with site design work. > > We are also looking into accepting contributions from the Strategic Focus > Areas folks to make parts of the docs toolchain like openstackdocstheme more > easily reusable outside of the official OpenStack infrastructure. > > We got feedback on front page template for project team docs, with Ironic > being the pilot for us. 
> > We got input on restructuring and reworking specs site to make it easier > for users to understand that specs are not feature descriptions nor project > docs, and to make it more consistent in how the project teams publish their > specs. This will need to be further discussed with the folks owning the > specs site infra. > > Support status badges showing at the top of docs.o.o pages may not work well > for projects following the cycle-with-intermediary release model, such as > Swift. We need to rethink how we configure and present the badges. > > There are also some UX bugs present for the badges > (https://bugs.launchpad.net/openstack-doc-tools/+bug/1788389). > > TRANSLATIONS > > We met with the infra team to discuss progress on translating project team > docs and, related to that, PDF builds. > > With the Foundation staff, we discussed translating Edge and Container > whitepapers and related material. > > REFERENCE, REST API DOCS AND RELEASE NOTES > > With the QA team, we discussed the scope and purpose of the > /doc/source/reference documentation area in project docs. Because the > scope of /reference might be unclear and used inconsistently by project > teams, the suggestion is to continue with the original migration plan and > migrate REST API and possibly Release Notes under /doc/source, as described > in https://docs.openstack.org/doc-contrib-guide/project-guides.html. > > CONTRIBUTOR GUIDE > > The OpenStack Contributor Guide was discussed in a separate session, see > https://etherpad.openstack.org/p/FC_SIG_ptg_stein for notes. > > THAT'S IT? > > Please add to the list if I missed anything important, particularly for > i18n. > > Thank you to everybody who attended the sessions, and a special thanks goes > to all the PTG organizers! 
> > Cheers, > pk > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gmann at ghanshyammann.com Thu Sep 20 04:17:40 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 20 Sep 2018 13:17:40 +0900 Subject: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)? In-Reply-To: <46cee3db-eecb-97f6-a793-c33d57a71ad2@gmail.com> References: <70abbabe-2480-4c25-0665-a14b2eb5f3ab@gmail.com> <75ef2549-dfba-3267-5e76-0c59c64cd4ac@gmail.com> <165ea8d9f10.add97103175992.5456929857422374986@ghanshyammann.com> <46cee3db-eecb-97f6-a793-c33d57a71ad2@gmail.com> Message-ID: <165f532a147.101be5e29237977.3680194311327740239@ghanshyammann.com> ---- On Wed, 19 Sep 2018 02:26:30 +0900 Matt Riedemann wrote ---- > On 9/17/2018 9:41 PM, Ghanshyam Mann wrote: > > ---- On Tue, 18 Sep 2018 09:33:30 +0900 Alex Xu wrote ---- > > > That only means after 599276 we only have servers API and os-instance-action API stopped accepting the undefined query parameter. > > > What I'm thinking about is checking all the APIs, add json-query-param checking with additionalProperties=True if the API don't have yet. And using another microversion set additionalProperties to False, then the whole Nova API become consistent. > > > > I too vote for doing it for all other API together. Restricting the unknown query or request param are very useful for API consistency. Item#1 in this etherpadhttps://etherpad.openstack.org/p/nova-api-cleanup > > > > If you would like, i can propose a quick spec for that and positive response to do all together then we skip to do that in 599276 otherwise do it for GET servers in 599276. 
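To make the `additionalProperties` behaviour being discussed in this thread concrete, here is a small self-contained sketch. It is illustrative only: it is not Nova's actual validation code (Nova uses JSON Schema via its API validation framework), and the parameter names below are invented for the example. It shows why flipping `additionalProperties` from True to False has to happen behind a microversion: previously-ignored unknown query parameters become errors.

```python
# Toy stand-in for JSON-Schema query-parameter validation.
# Not Nova's real code; parameter names are invented.

def unknown_params(params, schema):
    """Return the query parameters the schema would reject."""
    if schema.get("additionalProperties", True):
        # additionalProperties=True (the JSON-Schema default):
        # unknown parameters are silently accepted and ignored.
        return []
    # additionalProperties=False: anything not declared under
    # "properties" becomes a validation error.
    return sorted(name for name in params if name not in schema["properties"])

query_schema = {
    "type": "object",
    "properties": {"limit": {}, "marker": {}, "name": {}},
    "additionalProperties": False,  # flipped behind a new microversion
}

print(unknown_params({"limit": "10", "name": "vm1"}, query_schema))
print(unknown_params({"limit": "10", "bogus_filter": "x"}, query_schema))
```

With `additionalProperties` left at its default, the second call would return an empty list as well, which is exactly the inconsistency the proposed mass-cleanup microversion is meant to remove.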
> > > > -gmann > > I don't care too much about changing all of the other > additionalProperties=False in a single microversion given we're already > kind of inconsistent with this in a few APIs. Consistency is ideal, but > I thought we'd be lumping in other cleanups from the etherpad into the > same microversion/spec which will likely slow it down during spec > review. For example, I'd really like to get rid of the weird server > response field prefixes like "OS-EXT-SRV-ATTR:". Would we put those into > the same mass cleanup microversion / spec or split them into individual > microversions? I'd prefer not to see an explosion of microversions for > cleaning up oddities in the API, but I could see how doing them all in a > single microversion could be complicated. Sounds good to me. I also do not feel like increasing microversions for every cleanup. I would like to see all cleanup(worthy cleanup) in single microversion. I have pushed the spec for that for further discussion/debate. - https://review.openstack.org/#/c/603969/ -gmann > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kennelson11 at gmail.com Thu Sep 20 04:18:25 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 19 Sep 2018 21:18:25 -0700 Subject: [openstack-dev] Fwd: Denver Ops Meetup post-mortem In-Reply-To: References: Message-ID: Hello! On Tue, Sep 18, 2018 at 12:36 PM Chris Morgan wrote: > > > ---------- Forwarded message --------- > From: Chris Morgan > Date: Tue, Sep 18, 2018 at 2:13 PM > Subject: Denver Ops Meetup post-mortem > To: OpenStack Operators > > > Hello All, > Last week we had a successful Ops Meetup embedded in the OpenStack > Project Team Gathering in Denver. 
> > Despite generally being a useful gathering, there were definitely lessons > learned and things to work on, so I thought it would be useful to share a > post-mortem. I encourage everyone to share their thoughts on this as well. > > What went well: > > - some of the sessions were great and a lot of progress was made > - overall attendance in the ops room was good > - more developers were able to join the discussions > - facilities were generally fine > - some operators leveraged being at PTG to have useful involvement in > other sessions/discussions such as Keystone, User Committee, Self-Healing > SIG, not to mention the usual "hallway conversations", and similarly some > project devs were able to bring pressing questions directly to operators. > > What didn't go so well: > > - Merging into upgrade SIG didn't go particularly well > - fewer ops attended (in particular there were fewer from outside the US) > - Some of the proposed sessions were not well vetted > - some ops who did attend stated the event identity was diluted, it was > less attractive > - we tried to adjust the day 2 schedule to include late submissions, > however it was probably too late in some cases > > I don't think it's so important to drill down into all the whys and > wherefores of how we fell down here except to say that the ops meetups team > is a small bunch of volunteers all with day jobs (presumably just like > everyone else on this mailing list). The usual, basically. > > Much more important : what will be done to improve things going forward: > > - The User Committee has offered to get involved with the technical > content. In particular to bring forward topics from other relevant events > into the ops meetup planning process, and then take output from ops meetups > forward to subsequent events. We (ops meetup team) have welcomed this. > > - The Ops Meetups Team will endeavor to start topic selection earlier and > have a more critical approach. 
Having a longer list of possible sessions > (when starting with material from earlier events) should make it at least > possible to devise a better agenda. Agenda quality drives attendance to > some extent and so can ensure a virtuous circle. > > - We need to work out whether we're doing fixed schedule events (similar > to previous mid-cycle Ops Meetups) or fully flexible PTG-style events, but > grafting one onto the other ad-hoc clearly is a terrible idea. This needs > more discussion. > > - The Ops Meetups Team continues to explore strange new worlds, or at > least get in touch with more and more OpenStack operators to find out what > the meetups team and these events could do for them and hence drive the > process better. One specific work item here is to help the (widely > disparate) operator community with technical issues such as getting setup > with the openstack git/gerrit and IRC. The latter is the preferred way for > the community to meet, but is particularly difficult now with the > registered nickname requirement. We will add help documentation on how to > get over this hurdle. > After you get onto freenode at IRC you can register your nickname with a single command and then you should be able to join any of the channels. The command you need: ' /msg nickserv register $PASSWORD $EMAIL_ADDRESS'. You can find more instructions here about setting up IRC[1]. If you get stuck or have any questions, please let me know! I am happy to help with the setup of IRC or gerrit or anything else that might be a barrier. 
> - YOUR SUGGESTION HERE > > Chris > > -- > Chris Morgan > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -Kendall Nelson (diablo_rojo) [1] https://docs.openstack.org/contributors/common/irc.html# -------------- next part -------------- An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Thu Sep 20 05:21:28 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Thu, 20 Sep 2018 17:21:28 +1200 Subject: [openstack-dev] [keystone] noop role in openstack Message-ID: <13ed348c-d81c-55a1-1965-e5a89e5671a0@catalyst.net.nz> For Adam's benefit, continuing this a bit in email regarding the noop role: http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-09-20.log.html#t2018-09-20T04:13:43 The first benefit of such a role (in the given policy scenario) is that you can now give a user explicit scope on a project (while they can't do anything with it) and then use that role for Swift ACLs, with full knowledge that they can't do anything other than auth, scope to the project, and then whatever the ACLs let them do. An example use case: "a user that can ONLY talk to a specific container and NOTHING else in OpenStack or Swift", which is really useful if you want to use a single project for a lot of websites, backups, etc. Or in my MFA case, a role I can use when I want a user to still be able to auth and set up their MFA, but not actually touch any resources until they have MFA set up, at which point you give them back their real member role. It all relies on leaving no policy rules 'empty' unless those rules (and their API) really are safe for a noop role. And by empty I don't really mean empty, I mean "any role on a project". Because that's painful to then work with.
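As a deliberately simplified illustration of the difference between the two rule styles, the sketch below hand-rolls both checks in plain Python. It is not oslo.policy, and the credential/target dicts are invented for the example; it only shows why a project-scope-only rule makes a "noop" role indistinguishable from a real one:

```python
# Simplified illustration, not oslo.policy. creds/target dicts are invented.

def any_role_rule(creds, target):
    # Mirrors a rule like "is_admin:True or project_id:%(project_id)s":
    # any role scoped to the project is enough to pass.
    return creds["is_admin"] or creds["project_id"] == target["project_id"]

def member_role_rule(creds, target):
    # Mirrors "is_admin:True or (role:member and project_id:%(project_id)s)":
    # project scope alone no longer grants access.
    return creds["is_admin"] or (
        "member" in creds["roles"] and creds["project_id"] == target["project_id"]
    )

noop_user = {"is_admin": False, "roles": ["noop"], "project_id": "p1"}
member_user = {"is_admin": False, "roles": ["member"], "project_id": "p1"}
server = {"project_id": "p1"}

print(any_role_rule(noop_user, server))     # True: "noop" can still act on resources
print(member_role_rule(noop_user, server))  # False: the noop role really is a no-op
print(member_role_rule(member_user, server))  # True
```

Under the first rule the noop user can touch every project resource, so Swift ACLs can't meaningfully constrain them; under the second, the noop role grants auth and scope but nothing else.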
With the default policies in Nova (and most other projects), you can't actually make proper use of Swift ACLs, because having any role on a project gives you access to all the resources. Take, say: https://github.com/openstack/nova/blob/master/nova/policies/base.py#L31 ^ that rule implies: if you are scoped to the project, regardless of role, you can do anything to the resources. That doesn't work for anything role-specific. Such rules would need to be: "is_admin:True or (role:member and project_id:%(project_id)s)" If we stop with this assumption that "any role" on a project works, suddenly policy becomes more powerful and the roles are actually useful beyond admin vs not admin. System scope will help, but then we'll still only have system scope, admin on a project, and not admin on a project, which still makes the role mostly pointless. We as a community need to stop with this assumption (that "any role" on a project works), because it hurts us with regard to actually useful RBAC. Yes, deployers can edit the policy to avoid the any-role-on-a-project issue (we have), but it's a huge amount of work to figure out, which we could instead do together and fix upstream. Part of that work is actually happening, with the default roles that Keystone is defining and with system scope. We can then start updating all the project default policies to actually require those roles explicitly, but that effort needs us to get everyone on board... From rico.lin.guanyu at gmail.com Thu Sep 20 05:22:17 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 20 Sep 2018 13:22:17 +0800 Subject: [openstack-dev] [heat] We need more help for actions and review. And some PTG update for Heat Message-ID: Dear all As we reach Stein and start to discuss what we should do in the next cycle, I would like to raise my voice about what kind of help we need and what targets we are planning.
BTW, please don't hesitate to contact us regardless of your English level (we don't mind what English skill you have). *First of all, we need more developers and reviewers. I would very much like to give the Heat core reviewer title to anyone who provides a fair quality of reviews. So please help us review patches. Let me know if you would like to take part but have no clue how to get started.* Second, we need more help to complete actions. Here I make a list of actions based on what we discussed at the PTG [1]. I mark some of them with (*) if they look like an easy contribution: - (*) Move interop tempest tests to a separate repo - Move the python3 functional job to python3.6 - (*) Implement upgrade check - (*) Copy templates from the Cue project into the heat-templates repo - (*) Add Stein template versions - (*) Improve documents or add documents for: - (*) Heat event notification list - Nice to have our own document and provide a link to [2] - the default Heat service doesn't enable notifications, so this should be mentioned with a link to the Notify page - (*) Autoscaling doc - (*) AutoHealing doc - (*) heat agent & heat container agent - (*) external resource - (*) Upgrade guideline - (*) Move documents from the wiki into in-repo documents - (*) Fix the live properties (observe reality) feature and make sure all resources work - Remove any legacy patterns from .zuul.yaml - Improve autoscaling and self-healing - Create a Tempest test for the self-healing scenario (around Heat integration) - (*) Examine all resource types and help update any that are not in sync with the physical resource If you would like to learn more about any of the above tasks, just reach out to me or the other core members, and we're more than happy to give you the background and guidelines for any of them. Also, you are welcome to join our meeting and raise topics for any tasks.
We actually have more tasks that need to be done (not listed here because they are already being implemented or are still being planned), so if you didn't see any interesting task above, you can reach out to me and let me know which specific area you're interested in. Also, you might want to go through [1] or talk to other team members to see whether any more comments were added before you start working on a task. Now here are some targets that we have started to discuss or that are work in progress - Multi-cloud support - In [5], we propose the ability to do multi-cloud orchestration, and the follow-on discussion is how we can provide the ability to use customized SSL options for multi-cloud or multi-region orchestration without violating any security concerns. What we plan to do now (after discussing with the security SIG (Barbican team)) is to only support a cacert for SSL, which is less sensitive. We will use a template file to store that cacert and give it to the keystone session to provide SSL support for connections. If that sounds like a good idea to all, without major concerns, I will implement it asap. - Autoscaling and self-healing improvement - This is a big, complex task for sure, and one that relates to multiple projects. We have a fair number of users using the autoscaling feature, but not many using self-healing for now. So we will focus on each feature, and on the integration of the two features, separately. - First, Heat has the ability to orchestrate autoscaling, but we need to improve its stability. We are still going through our code base to see how we can modularize the current implementation and how we can improve from here, and we will share more information with everyone. We are also starting to discuss autoscaling integration [3], from which we hopefully can get a better solution and combine forces between Heat and Senlin as a long-term target. Please give your feedback if you also care about this target.
- For self-healing, we propose some co-work on cross-project gating in the Self-healing SIG. We have not generated a tempest test out of it yet, but we assume we can start to set up a job and raise discussion on how we can help projects adopt that job. Also, we have had discussions with the Octavia team ([7] and [8]) and the Monasca team about adopting the ability to support event alarms/notifications, which we plan to turn into actions. If anyone else thinks those are important features, please provide your development resources so we can get those features done in this cycle. - For integrating the two scenarios, I am trying to add more tasks to [6] and eliminate as many as we can. We also plan to document these scenarios, so everyone can play with autoscaling+self-healing easily. - Glance resource update - We deprecated the image resource in Heat a very long time ago, and now that Glance has the ability to download images by URL, we should be able to adopt the new image service and renew/add our Image resources. What's missing is support for this feature in Devstack so we can use it to test in the gate. Discussion has already been raised on the ML [9] and at the PTG [10]. So hopefully we can help to provide a better test before we adopt the feature. - Non-convergence mode deprecation discussion and User survey update - In the PTG UC meeting, the UC decided to renew the user survey for projects, and Heat has already prepared a new question [4] for it. The reason we raise that question is that we would really like to learn from ops/users what the adoption rate of convergence mode is before we deprecate the non-convergence (legacy) mode. We are going to use that data to decide whether or not we're ready for the next action. - KeyPair issue in Heat stacks - A user-scoped resource like KeyPair is a known issue for Heat (because all our actions are project-scoped). For example, say User A creates a Keypair+Instance in a stack. That keypair is specific to User A.
If that stack is then updated by User B, the keypair will not be accessible (since User B does not have any authorization to get that keypair), unless User B can access the same keypair or another keypair with the same name and content. - For actions and proposed solutions, we are going to send a known-issue note to users. We will also try to propose either of two possible solutions: integrate Barbican with Nova Keypair, or allow a Keypair to change its scope. I am aware there is already a discussion in the Nova team about changing to project scope, but for now we are waiting for that discussion to generate actions before we can say this issue is covered. - And more - Again, it's not possible to talk about every feature or plan in a single ML post. So please take a look at our storyboard [11] if you would like to see anything improved. Also, tasks always accelerate when we have more resources to put on them, so helping us develop, review, or provide any feedback is very, very welcome! For any feedback added to the etherpad that didn't get comments, I will try to raise discussion about it in the meeting. And last but not least, we have some sessions in Berlin for a project update and onboarding, and potentially also an ops/users feedback forum and an autoscaling integration forum (if we actually get accepted). So please let me know how you would like those sessions to be run, and what you wish to hear/learn from them.
[1] https://etherpad.openstack.org/p/2018-Denver-PTG-Heat [2] https://wiki.openstack.org/wiki/SystemUsageData#orchestration.stack..7Bcreate.2Cupdate.2Cdelete.2Csuspend.2Cresume.7D..7Bstart.2Cerror.2Cend.7D : [3] https://etherpad.openstack.org/p/autoscaling-integration-and-feedback [4] https://etherpad.openstack.org/p/heat-user-survey-brainstrom [5] https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/multiple-cloud-support [6] https://storyboard.openstack.org/#!/story/2003690 [7] https://storyboard.openstack.org/#!/story/2003782 [8] https://storyboard.openstack.org/#!/story/2003773 [9] http://lists.openstack.org/pipermail/openstack-dev/2018-August/134019.html [10] https://etherpad.openstack.org/p/stein-ptg-glance-planning [11] storyboard.openstack.org/#!/project/989 -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Sep 20 05:23:26 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 19 Sep 2018 22:23:26 -0700 Subject: [openstack-dev] [First Contact] SIG PTG Summary Message-ID: Hello Everyone! The first half of the week was particularly busy and I know a lot of people wanted to come to the First Contact SIG room that could not, or couldn’t be there for the whole time. So! For those of you interested that maybe couldn’t make it for the whole time, here is a beautiful summary :) And the etherpad if you want that too[1]. State of the Union ============== We started off with a state of the union on all of the outreach/new contributor groups to get everyone involved on the same page so we can better disseminate the info to new comers and potential contributors. Outreachy -------------- The winter round of applications for mentees opens the 19th and remains open until October 30th. There are two Outreachy internships sponsored for this round[2]. 
Close to the end of this round of internships (they end in March), the next round will kick off sponsorships and applications for projects in February. Google Summer of Code --------------------------------- We applied and were not accepted last year. We would like to urge the community to apply again. There's no info yet on when applications will open for organisations to apply for interns. OpenStack Upstream Institute ---------------------------------------- We continue to offer this before each summit and at a variety of OpenStack Day and OpenInfra Day events. Over the last year we have held the training in Vancouver, Seoul, Krakow, Sao Paulo, and Rio de Janeiro. In October, the training will be held at OpenStack Day Nordics in Stockholm[3] and preceding the summit in Berlin[4]. A modified version will also be held at the Open Source Day at the upcoming Grace Hopper Conference in Houston[5]. Cohort Mentoring Program ----------------------------------- Formerly the Longterm Mentoring Program organized by the Women of OpenStack, the mentoring program has changed hands and gotten a facelift. The program is now cohort style (groups of mentees and mentors focusing on a single goal, like passing the COA or getting your first patch merged) rather than the 1x1, loosely timeboxed model. It's also now organized and mediated by the Diversity Working Group. If you are interested in getting involved, more details are here[6]. Organization Guide =============== Back in Sydney, we started discussing creating a guide for organizations to educate them on what their contributors need to be effective and successful members of the community. There is a patch out for review right now[7] that we want to get merged as soon as possible so that we can publicize it in Berlin and start introducing it when new companies join the foundation.
It was concluded that we needed to add more rationalizations to the requirements and we delegated those out to ricolin, jungleboyj, and spotz to help mattoliverau with content. As soon as this patch gets merged, I volunteered to work to get it onto the soonest board meeting possible. Operator Inclusion/ Ops Feedback ========================== Unfortunately, many operators were in the Operators room- yeah..they got scheduled to overlap..oh well. We did have a few representatives join us though. Basically we concluded that the operator section of the Contributor Guide is wholly unattractive as it’s a daunting outline of a bunch of things that aren’t immediately obvious as important to Operators. It needs to be broken up and the ‘Allows you to’ subsections of each part of the outline need to be moved up to the top level so that operators can more easily see and understand why sections of it are important. There are a few other cosmetic things that also need to be resolved- more details are in the etherpad from Tuesday’s discussions lines 49-62[1]. The operators are also currently trying to get their docs up and running again after having been unsupported, partially migrated to wikis, and then moved back to a repository. Once these are a little more fleshed out and settled, we will link to them from the Contributor Guide. I also attended the Operator’s room on Monday and tried to put a call out for a single point of contact, like a project liaison, so that any operators we see asking for help or resources can be directed to that point of contact. No one stepped up during the meeting, but its still something we see as being important, and will keep pushing to get one or two names to direct new operators to. Forum Proposals ============= Submissions are now open! 
We have an etherpad from the brainstorming period[8], but basically we want a forum session that will focus on the Operator section of the Contributor Guide, and jimmymacarthur volunteered to write up and submit this. The only other topic we really discussed around Berlin was a Meet & Greet sort of room that we would request during the call for BoFs for the summit. This call recently went out, and I will put in the request by the end of the week. Translation of the Contributor Guide =========================== Basically, we want it translated- code & docs, operators, users, and organisations sections; all of it. The discussion focused on a lot of the nitty-gritty: what sort of translators would be interested in helping, technical or more high-level translators? What is our timeline? When would our string freezes be? The to-dos that came about were for me to talk to ashfergs about getting this on the discussion docket for the Ambassadors and to help ianychoi with setting up the connection between the repo and Zanata. Contributor Landing Page on Docs.o.o ============================= This was a short discussion with pkovar about how we might want to style this. We talked about having links into the various parts of the contributor guide and having a list of links to all of the contribution sections of the project-specific docs. No real to-dos came out of this, but it's definitely something we will want to circle back to once some of the other to-dos from the day get accomplished. Summary & Contacting Us Further ========================== Lots happened and I am probably missing things (it was already a week ago and it was a bit chaotic at times), but I think I hit on most of the big stuff. Good news is we had a few people step up to take things on, so it's not all on me ;) We continue our biweekly meetings at 7:00 UTC in #openstack-meeting if you can join us! 
Otherwise we mostly all live on IRC in a variety of channels: #openstack-dev, #openstack-upstream-institute, #openstack-doc. Our IRC nicks and timezones are listed on our wiki page here[9]! If asynchronous communication is more your style, we use the [First Contact] tag on both the openstack-sigs and openstack-dev mailing lists. See you around :) -Kendall Nelson (diablo_rojo) [1] https://etherpad.openstack.org/p/FC_SIG_ptg_stein [2] https://www.outreachy.org/apply/project-selection/ [3] http://stockholm.openstacknordic.org/program [4] https://www.openstack.org/summit/berlin-2018/summit-schedule/global-search?t=Upstream+Institute [5] https://ghc.anitab.org/2018-attend/schedule-overview/open-source-day/ [6] https://wiki.openstack.org/wiki/Mentoring#Long_Term_Mentoring [7] https://review.openstack.org/#/c/578676 [8] https://etherpad.openstack.org/p/FC_SIG_BER_Planning [9] https://wiki.openstack.org/wiki/First_Contact_SIG -------------- next part -------------- An HTML attachment was scrubbed... URL: From liu.xuefeng1 at zte.com.cn Thu Sep 20 06:32:37 2018 From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn) Date: Thu, 20 Sep 2018 14:32:37 +0800 (CST) Subject: [openstack-dev] Re: [senlin] Senlin Monthly(ish) Newsletter Sep 2018 In-Reply-To: References: CAN81NT5gfNnXWFw4zVq0j_zd_UMbfemWPs2X4SajLCUmdF+yXg@mail.gmail.com Message-ID: <201809201432373305494@zte.com.cn> Great. Original message From: DucTruong To: openstack-dev at lists.openstack.org Date: 2018-09-20 02:15 Subject: [openstack-dev] [senlin] Senlin Monthly(ish) Newsletter Sep 2018 HTML: https://dkt26111.wordpress.com/2018/09/19/senlin-monthlyish-newsletter-september-2018/ This is the inaugural Senlin monthly(ish) newsletter. The goal of the newsletter is to highlight happenings in the Senlin project. If you have any feedback or questions regarding the contents, please feel free to reach out to me or anyone in the #senlin IRC channel. 
News ---- * Senlin weekly meeting time was changed at the beginning of the Stein cycle to 5:30 UTC every Friday. Feel free to drop in. * Two new core members were added to the Senlin project. Welcome jucross and eandersson. * One new stable reviewer was added for Senlin stable maintenance. Welcome chenyb4. * Autoscaling forum is being proposed for the Berlin Summit (http://lists.openstack.org/pipermail/openstack-dev/2018-September/134770.html). Add your comments/feedback to this etherpad: https://etherpad.openstack.org/p/autoscaling-integration-and-feedback Blueprint Status ---------------- * Fail fast locked resource - https://blueprints.launchpad.net/senlin/+spec/fail-fast-locked-resource - Spec was approved and implementation is WIP. * Multiple detection modes - https://blueprints.launchpad.net/senlin/+spec/multiple-detection-modes - Spec approval is pending (https://review.openstack.org/#/c/601471/). * Fail-fast on cooldown for scaling operations - Waiting for blueprint/spec submission. * OpenStackSDK support senlin function test - Waiting for blueprint submission. * Senlin add support use limit return - Waiting for blueprint submission. * Add zun driver in senlin, use zun manager container - Waiting for blueprint submission. Community Goal Status --------------------- * Python 3 - All patches by Python 3 goal champions for zuul migration, documentation and unit test changes have been merged. * Upgrade Checkers - No work has started on this. If you'd like to help out with this task, please let me know. 
Recently Merged Changes ----------------------- * Bug #1777774 was fixed (https://review.openstack.org/#/c/594643/) * Improvements to node poll URL mode in health policy (https://review.openstack.org/#/c/588674/) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Thu Sep 20 06:49:25 2018 From: jean-philippe at evrard.me (jean-philippe at evrard.me) Date: Thu, 20 Sep 2018 08:49:25 +0200 Subject: [openstack-dev] Re: [docs] Nominating Ian Y. Choi for openstack-doc-core In-Reply-To: <83015120-6adb-e11b-3d40-d2dd57b773c8@gmail.com> Message-ID: <317c-5ba34300-f-3a6b32c0@224927470> > > > > Based on our PTG discussion, I'd like to nominate Ian Y. Choi for > > membership in the openstack-doc-core team. I think Ian doesn't need an > > introduction, he's been around for a while, recently being deeply involved > > in infra work to get us robust support for project team docs translation and > > PDF builds. > > > > Having Ian on the core team will also strengthen our integration with > > the i18n community. > > > > Please let the ML know should you have any objections. > Petr, > > Not a doc core but wanted to add my support. Agree he would be a great > addition. Appreciate all he does for i18n, docs and OpenStack! > > Jay Likewise. Great addition! 
From balazs.gibizer at ericsson.com Thu Sep 20 07:34:31 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Thu, 20 Sep 2018 09:34:31 +0200 Subject: [openstack-dev] Nominating Tetsuro Nakamura for placement-core In-Reply-To: References: Message-ID: <1537428871.22384.0@smtp.office365.com> On Wed, Sep 19, 2018 at 5:25 PM, Chris Dent wrote: > > > I'd like to nominate Tetsuro Nakamura for membership in the > placement-core team. Throughout placement's development Tetsuro has > provided quality reviews; done the hard work of creating rigorous > functional tests, making them fail, and fixing them; and implemented > some of the complex functionality required at the persistence layer. > He's aware of and respects the overarching goals of placement and has > demonstrated pragmatism when balancing those goals against the > requirements of nova, blazar and other projects. > > Please follow up with a +1/-1 to express your preference. No need to > be an existing placement core, everyone with an interest is welcome. I'm a soft +1 on Tetsuro. I'm +1 because the code and reviews I have read from Tetsuro look solid to me. I'm only a soft +1 because I have not worked with Tetsuro enough to express a hard opinion. Cheers, gibi > > Thanks. 
> > -- > Chris Dent ٩◔̯◔۶ > https://anticdent.org/ > freenode: cdent tw: @anticdent > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zhipengh512 at gmail.com Thu Sep 20 08:39:43 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 20 Sep 2018 16:39:43 +0800 Subject: [openstack-dev] [Openstack-sigs] [Openstack-operators] [tc]Global Reachout Proposal In-Reply-To: <96743b2c-7d12-0769-9176-746c2d4edbbe@anteaya.info> References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> <96743b2c-7d12-0769-9176-746c2d4edbbe@anteaya.info> Message-ID: Thanks Anita, will definitely do as you kindly suggested :) On Thu, Sep 20, 2018, 12:04 PM Anita Kuno wrote: > On 2018-09-18 08:40 AM, Jeremy Stanley wrote: > > On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote: > > [...] > >> I can understand that IRC cannot be used in China which is very > >> painful and mostly it is used weChat. > > [...] > > > > I have yet to hear anyone provide first-hand confirmation that > > access to Freenode's IRC servers is explicitly blocked by the > > mainland Chinese government. There has been a lot of speculation > > that the usual draconian corporate firewall policies (surprise, the > > rest of the World gets to struggle with those too, it's not just a > > problem in China) are blocking a variety of messaging protocols from > > workplace networks and the people who encounter this can't tell the > > difference because they're already accustomed to much of their other > > communications being blocked at the border. 
I too have heard from > > someone who's heard from someone that "IRC can't be used in China" > > but the concrete reasons why continue to be missing from these > > discussions. > > > > I'll reply to this email arbitrarily in order to comply with Zhipeng > Huang's wishes that the conversation concerned with understanding the > actual obstacles to communication takes place on the mailing list. I do > hope I am posting to the correct thread. > > In response to part of your comment on the patch at > https://review.openstack.org/#/c/602697/ which you posted about 5 hours > ago you said "@Anita you are absolutely right it is only me stuck my > head out speaks itself the problem I stated in the patch. Many of the > community tools that we are comfortable with are not that accessible to > a broader ecosystem. And please assured that I meant I refer the patch > to the Chinese community, as Leong also did on the ML, to try to bring > them over to join the convo." and I would like to reply. > > I would like to say that I am honoured by your generosity. Thank you. > Now, when the Chinese community consumes the patch, as well as the > conversation in the comments, please encourage folks to ask for > clarification if any descriptions or phrases don't make sense to them. > One of the best ways of ensuring clear communication is to start off > slowly and take the time to ask what the other side means. It can seem > tedious and a waste of time, but I have found it to be very educational > and helpful in understanding how the other person perceives the > situation. It also helps me to understand how I am creating obstacles in > ways that I talk. > > Taking time to clarify helps me to adjust how I am speaking so that my > meaning is more likely to be understood by the group to which I am > trying to offer my perspective. 
I do appreciate that many people are > trying to avoid embarrassment, but I have never found any way to > understand people in a culture that is not the one I grew up in, other > than embarrassing myself and working through it. Usually I find the > group I am wanting to understand is more than willing to rescue me from > my embarrassment and support me in my learning. In a strange way, the > embarrassment is kind of helpful in order to create understanding > between myself and those people I am trying to understand. > > Thank you, Anita > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Thu Sep 20 08:55:31 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 20 Sep 2018 10:55:31 +0200 Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper" In-Reply-To: <20180920024720.GA18981@zeong> References: <165f4bed73b.11e3442fc237712.4614262941605225022@ghanshyammann.com> <20180920024720.GA18981@zeong> Message-ID: <2288307.xHQO2eDbp7@whitebase.usersys.redhat.com> On Thursday, 20 September 2018 04:47:20 CEST Matthew Treinish wrote: > On Thu, Sep 20, 2018 at 11:11:12AM +0900, Ghanshyam Mann wrote: > > ---- On Wed, 19 Sep 2018 23:29:46 +0900 Monty Taylor > > wrote ----> > > > On 09/19/2018 09:23 AM, Monty Taylor wrote: > > > > On 09/19/2018 08:25 AM, Chris Dent wrote: > > > >> I have a patch in progress to add some simple integration tests to > > > >> > > > >> placement: > > > >> https://review.openstack.org/#/c/601614/ > > > >> > > > >> They use https://github.com/cdent/gabbi-tempest . 
The idea is that > > > >> the method for adding more tests is to simply add more yaml in > > > >> gate/gabbits, without needing to worry about adding to or think > > > >> about tempest. > > > >> > > > >> What I have at that patch works; there are two yaml files, one of > > > >> which goes through the process of confirming the existence of a > > > >> resource provider and inventory, booting a server, seeing a change > > > >> in allocations, resizing the server, seeing a change in allocations. > > > >> > > > >> But this is kludgy in a variety of ways and I'm hoping to get some > > > >> help or pointers to the right way. I'm posting here instead of > > > >> asking in IRC as I assume other people confront these same > > > >> confusions. The issues: > > > >> > > > >> * The associated playbooks are cargo-culted from stuff labelled > > > >> > > > >> "legacy" that I was able to find in nova's jobs. I get the > > > >> impression that these are more verbose and duplicative than they > > > >> need to be and are not aligned with modern zuul v3 coolness. > > > > > > > > Yes. Your life will be much better if you do not make more legacy > > > > jobs. > > > > They are brittle and hard to work with. > > > > > > > > New jobs should either use the devstack base job, the > > > > devstack-tempest > > > > base job or the devstack-tox-functional base job - depending on what > > > > things are intended. > > > > +1. All the base jobs from Tempest and Devstack (except grenade, which is in > > progress) are available to use as a base for legacy jobs. Using > > devstack-tempest in your patch is the right thing. In addition, you need to > > mention the tox_envlist as all-plugins to make tempest_test_regex work. I > > commented on review. > No, all-plugins is incorrect and should never be used. It's only there for > legacy support, it is deprecated and I thought we pushed a patch > indicating that (but I can't find it). This one? 
https://review.openstack.org/#/c/543974/ Ciao -- Luigi From majopela at redhat.com Thu Sep 20 09:06:04 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Thu, 20 Sep 2018 11:06:04 +0200 Subject: [openstack-dev] [neutron] Core status In-Reply-To: <20180919193151.oq6rdivmuzue4ghu@bishop> References: <20180919193151.oq6rdivmuzue4ghu@bishop> Message-ID: Good luck Gary, thanks for all those years on Neutron! :) Best regards, Miguel Ángel On Wed, Sep 19, 2018 at 9:32 PM Nate Johnston wrote: > On Wed, Sep 19, 2018 at 06:19:44PM +0000, Gary Kotton wrote: > > > I have recently transitioned to a new role where I will be working on > other parts of OpenStack. Sadly I do not have the necessary cycles to > maintain my core responsibilities in the neutron community. Nonetheless I > will continue to be involved. > > Thanks for everything you've done over the years, Gary. I know I > learned a lot from your reviews back when I was a wee baby Neutron > developer. Best of luck on what's next! > > Nate > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Miguel Ángel Ajo OSP / Networking DFG, OVN Squad Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at johngarbutt.com Thu Sep 20 09:16:34 2018 From: john at johngarbutt.com (John Garbutt) Date: Thu, 20 Sep 2018 10:16:34 +0100 Subject: [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter Message-ID: Hi, Following on from the PTG discussions, I wanted to bring everyone's attention to Nova's plans to deprecate ComputeCapabilitiesFilter, including most of the integration with Ironic Capabilities. 
To be specific, this is my proposal in code form: https://review.openstack.org/#/c/603102/ Once the code we propose to deprecate is removed we will stop using capabilities pushed up from Ironic for 'scheduling', but we would still pass capabilities requests in the flavor down to Ironic (until we get some standard traits and/or deploy templates sorted for things like UEFI). Functionally, we believe all use cases can be replaced by using the simpler placement traits (this is more efficient than post-placement filtering done using capabilities): https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/ironic-driver-traits.html Please note the recent addition of forbidden traits that helps improve the usefulness of the above approach: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-forbidden-traits.html For example, a flavor request for GPUs >= 2 could be replaced by a custom trait that reports if a given Ironic node has CUSTOM_MORE_THAN_2_GPUS. That is a bad example (longer term we don't want to use traits for this, but that is a discussion for another day) but it is the example that keeps being raised in discussions on this topic. The main reason for reaching out in this email is to ask if anyone has needs that the ResourceClass and Traits scheme does not currently address, or can think of a problem with a transition to the newer approach. Many thanks, John Garbutt IRC: johnthetubaguy -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at johngarbutt.com Thu Sep 20 09:43:00 2018 From: john at johngarbutt.com (John Garbutt) Date: Thu, 20 Sep 2018 10:43:00 +0100 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: Message-ID: tl;dr +1 consistent names I would make the names mirror the API ... 
because the Operator setting them knows the API, not the code Ignore the crazy names in Nova, I certainly hate them Lance Bragstad wrote: > I'm curious if anyone has context on the "os-" part of the format? My memory of the Nova policy mess... * Nova's policy rules traditionally followed the patterns of the code ** Yes, horrible, but it happened. * The code used to have the OpenStack API and the EC2 API, hence the "os" * API used to expand with extensions, so the policy name is often based on extensions ** note most of the extension code has now gone, including lots of related policies * Policy in code was focused on getting us to a place where we could rename policy ** Whoop whoop by the way, it feels like we are really close to something sensible now! Lance Bragstad wrote: > Thoughts on using create, list, update, and delete as opposed to post, > get, put, patch, and delete in the naming convention? > I could go either way as I think about "list servers" in the API. But my preference is for the URL stub and POST, GET, etc. On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote: > If we consider dropping "os", should we entertain dropping "api", too? Do >> we have a good reason to keep "api"? >> I wouldn't be opposed to simple service types (e.g "compute" or >> "loadbalancer"). >> > +1 The API is known as "compute" in api-ref, so the policy should be for "compute", etc. From: Lance Bragstad > The topic of having consistent policy names has popped up a few times this week. I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following: * move to new consistent policy name, deprecate existing name * hardcode scope check to project, system or user ** (user, yes... 
keypairs, yuck, but it's how they work) ** deprecate in rule scope checks, which are largely bogus in Nova anyway * make read/write/admin distinction ** therefore adding the "noop" role, among other things Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Sep 20 10:20:54 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 20 Sep 2018 12:20:54 +0200 Subject: [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal In-Reply-To: References: <55c81a61-5670-c1c3-20f1-aa4d8153c8a2@redhat.com> <08783f7e-8201-e6ad-993b-dbb0193de536@redhat.com> Message-ID: <52331323-2e59-11e2-3bfe-2b450ea45931@openstack.org> Melvin Hillsman wrote: > https://thelounge.chat/ >   - have not tried it yet but looks promising especially self-hosted option We had a discussion in the infra room on TheLounge: there is a long-standing infra spec request to offer such a service on our project infrastructure, and it would go a long way to solve the "default experience" as it provides a nice UI by default with "always connected" behavior that most people expect from such communication media these days. The main blocker AIUI is to integrate it with some authentication mechanism(s)... -- Thierry Carrez (ttx) From amotoki at gmail.com Thu Sep 20 10:58:25 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 20 Sep 2018 19:58:25 +0900 Subject: [openstack-dev] [neutron] Core status In-Reply-To: References: Message-ID: Thanks, Gary, for your long years on Neutron. You were already a core before I became quantum-core, and now we are losing you.... Anyway, good luck in your new role. Thanks, Akihiro On Thu, 20 Sep 2018 at 3:20, Gary Kotton wrote: > Hi, > > I have recently transitioned to a new role where I will be working on > other parts of OpenStack. Sadly I do not have the necessary cycles to > maintain my core responsibilities in the neutron community. Nonetheless I > will continue to be involved. 
> > Thanks > > Gary > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Sep 20 11:24:10 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 20 Sep 2018 13:24:10 +0200 Subject: [openstack-dev] [neutron] Core status In-Reply-To: References: Message-ID: Hi, Thanks for all your work for Neutron and good luck in your new role :) > Message written by Gary Kotton on 19.09.2018, at 20:19: > > Hi, > I have recently transitioned to a new role where I will be working on other parts of OpenStack. Sadly I do not have the necessary cycles to maintain my core responsibilities in the neutron community. Nonetheless I will continue to be involved. > Thanks > Gary > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From tpb at dyncloud.net Thu Sep 20 12:00:54 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 20 Sep 2018 08:00:54 -0400 Subject: [openstack-dev] [manila] manila core team cleanup Message-ID: <20180920120054.tx4tdyxyd5beu6rp@barron.net> Mark Sturdevant recently contacted me to say that due to changes in his job responsibilities he isn't currently able to stay sufficiently involved in the manila project to serve as a core reviewer. Thanks to Mark for his great service in the past! We'd love to have him back if things change. 
-- Tom Barron (tbarron) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jaosorior at redhat.com Thu Sep 20 12:35:29 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Thu, 20 Sep 2018 15:35:29 +0300 Subject: [openstack-dev] [tripleo] Clearing up the gate Message-ID: We've been having a lot of timeouts and the gate is stacking up. I'm purging out patches from the gate in order to reduce used resources while this is sorted out. Please do not merge patches until this issue is sorted out. BR From lbragstad at gmail.com Thu Sep 20 12:37:35 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 20 Sep 2018 07:37:35 -0500 Subject: [openstack-dev] [keystone] noop role in openstack In-Reply-To: <13ed348c-d81c-55a1-1965-e5a89e5671a0@catalyst.net.nz> References: <13ed348c-d81c-55a1-1965-e5a89e5671a0@catalyst.net.nz> Message-ID: On Thu, Sep 20, 2018 at 12:22 AM Adrian Turjak wrote: > For Adam's benefit continuing this a bit in email: > > regarding the noop role: > > > http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-09-20.log.html#t2018-09-20T04:13:43 > > The first benefit of such a role (in the given policy scenario) is that > you can now give a user explicit scope on a project (but they can't do > anything) and then use that role for Swift ACLs with full knowledge they > can't do anything other than auth, scope to the project, and then > whatever the ACLs let them do. An example use case being: "a user that > can ONLY talk to a specific container and NOTHING else in OpenStack or > Swift" which is really useful if you want to use a single project for a > lot of websites, or backups, or etc. 
> > Or in my MFA case, a role I can use when wanting a user to still be able > to auth and setup their MFA, but not actually touch any resources until > they have MFA setup at which point you give them back their real member > role. > > It all relies on leaving no policy rules 'empty' unless those rules (and > their API) really are safe for a noop role. And by empty I don't mean > empty, really I mean "any role on a project". Because that's painful to > then work with. > > With the default policies in Nova (and most other projects), you can't > actually make proper use of Swift ACLs, because having any role on a > project gives you access to all the resources. Like say: > https://github.com/openstack/nova/blob/master/nova/policies/base.py#L31 > > ^ that rule implies, if you are scoped to the project, don't care about > the role, you can do anything to the resources. That doesn't work for > anything role specific. Such rules would need to be: > "is_admin:True or (role:member and project_id:%(project_id)s)" > > If we stop with this assumption that "any role" on a project works, > suddenly policy becomes more powerful and the roles are actually useful > beyond admin vs not admin. System scope will help, but then we'll still > only have system scope, admin on a project, and not admin on a project, > which still makes the role mostly pointless. > Kind of. System-scope is only half the equation for fixing RBAC because it gives developers an RBAC target that isn't project-scoped that they can use to protect APIs with. When you combine that with default roles (admin, member, and reader) [0] then you can start building a matrix, per se. [0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html > > We as a community need to stop with this assumption (that "any role" on > a project works), because it hurts us in regards to actually useful > RBAC. 
Yes deployers can edit the policy to avoid the "any role on a > project" issue (we have), but it's a huge amount of work to figure out > that we could all work together and fix upstream. > As I'm sure you know, even rolling custom policy files might not be enough. Despite an override, there are APIs that still check for 'admin' roles. > > Part of that work is actually happening. With the default roles that > Keystone is defining, and system scope. We can then start updating all > the project default policies to actually require those roles explicitly, > but that effort needs us to get everyone on board... > That's the idea. We're trying to build that out in keystone now so that other projects have a template to follow. 
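The difference discussed in this thread, between an "any role on a project" rule and a role-specific rule, can be sketched with a tiny stand-in for the policy check. This is only an illustration under the rule strings quoted earlier in the thread; real services evaluate such strings with oslo.policy rather than hand-written functions like these:

```python
# Simplified sketch (not oslo.policy) of the two rule shapes from the thread.

def any_role_rule(creds, target):
    # Mirrors "is_admin:True or project_id:%(project_id)s":
    # any role scoped to the project passes the check.
    return creds.get("is_admin", False) or creds["project_id"] == target["project_id"]

def member_rule(creds, target):
    # Mirrors "is_admin:True or (role:member and project_id:%(project_id)s)":
    # only the 'member' role scoped to the project passes.
    return creds.get("is_admin", False) or (
        "member" in creds["roles"] and creds["project_id"] == target["project_id"]
    )

# A user holding only a hypothetical "noop" role on project p1:
noop_user = {"is_admin": False, "roles": ["noop"], "project_id": "p1"}
target = {"project_id": "p1"}

print(any_role_rule(noop_user, target))  # True: the noop role still grants access
print(member_rule(noop_user, target))    # False: the noop role is actually inert
```

Under the role-specific rule, the noop role grants nothing beyond authentication and project scoping, which is what makes it safe to hand out for the Swift ACL and MFA-setup cases described above.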
Thanks Witek [1] https://etherpad.openstack.org/p/berlin-monasca-forum-brainstorming From mtreinish at kortar.org Thu Sep 20 13:38:21 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Thu, 20 Sep 2018 09:38:21 -0400 Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper" In-Reply-To: <2288307.xHQO2eDbp7@whitebase.usersys.redhat.com> References: <165f4bed73b.11e3442fc237712.4614262941605225022@ghanshyammann.com> <20180920024720.GA18981@zeong> <2288307.xHQO2eDbp7@whitebase.usersys.redhat.com> Message-ID: <20180920133821.GA24487@zeong> On Thu, Sep 20, 2018 at 10:55:31AM +0200, Luigi Toscano wrote: > On Thursday, 20 September 2018 04:47:20 CEST Matthew Treinish wrote: > > On Thu, Sep 20, 2018 at 11:11:12AM +0900, Ghanshyam Mann wrote: > > > ---- On Wed, 19 Sep 2018 23:29:46 +0900 Monty Taylor > > > wrote ----> > > > > On 09/19/2018 09:23 AM, Monty Taylor wrote: > > > > > On 09/19/2018 08:25 AM, Chris Dent wrote: > > > > >> I have a patch in progress to add some simple integration tests to > > > > >> > > > > >> placement: > > > > >> https://review.openstack.org/#/c/601614/ > > > > >> > > > > >> They use https://github.com/cdent/gabbi-tempest . The idea is that > > > > >> the method for adding more tests is to simply add more yaml in > > > > >> gate/gabbits, without needing to worry about adding to or think > > > > >> about tempest. > > > > >> > > > > >> What I have at that patch works; there are two yaml files, one of > > > > >> which goes through the process of confirming the existence of a > > > > >> resource provider and inventory, booting a server, seeing a change > > > > >> in allocations, resizing the server, seeing a change in allocations. > > > > >> > > > > >> But this is kludgy in a variety of ways and I'm hoping to get some > > > > >> help or pointers to the right way. I'm posting here instead of > > > > >> asking in IRC as I assume other people confront these same > > > > >> confusions. 
The issues: > > > > >> > > > > >> * The associated playbooks are cargo-culted from stuff labelled > > > > >> > > > > >> "legacy" that I was able to find in nova's jobs. I get the > > > > >> impression that these are more verbose and duplicative than they > > > > >> need to be and are not aligned with modern zuul v3 coolness. > > > > > > > > > > Yes. Your life will be much better if you do not make more legacy > > > > > jobs. > > > > > They are brittle and hard to work with. > > > > > > > > > > New jobs should either use the devstack base job, the > > > > > devstack-tempest > > > > > base job or the devstack-tox-functional base job - depending on what > > > > > things are intended. > > > > > > +1. All the base jobs from Tempest and Devstack (except grenade, which is in > > > progress) are available to use as a base for legacy jobs. Using > > > devstack-tempest in your patch is the right thing. In addition, you need to > > > mention the tox_envlist as all-plugins to make tempest_test_regex work. I > > > commented on the review. > > No, all-plugins is incorrect and should never be used. It's only there for > > legacy support; it is deprecated and I thought we pushed a patch > > indicating that (but I can't find it). > > This one? > https://review.openstack.org/#/c/543974/ > Yep, that's the one I was thinking of. Thanks, Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From cloudsuhartono at gmail.com Thu Sep 20 13:48:53 2018 From: cloudsuhartono at gmail.com (Cloud Suhartono) Date: Thu, 20 Sep 2018 20:48:53 +0700 Subject: [openstack-dev] [OpenStack-I18n] [docs][i18n][ptg] Stein PTG Summary In-Reply-To: <44520f86-4807-398a-fa48-94230d2d3673@gmail.com> References: <20180919163703.32ec748555e59bd2984a542e@redhat.com> <44520f86-4807-398a-fa48-94230d2d3673@gmail.com> Message-ID: Hi Ian, You showed us the Indonesian translations at [1][2][3][4][5]. Thank you very much /Suhartono [1] https://www.openstack.org/user-survey/survey-2018/landing?lang=id_ID [2] https://www.openstack.org/edge-computing/cloud-edge-computing-beyond-the-data-center [3] https://www.openstack.org/containers/leveraging-containers-and-openstack/ [4] https://docs.openstack.org/id/ [5] https://docs.openstack.org/i18n/latest/id/release_management.html On Thu, Sep 20, 2018 at 11:09 AM Ian Y. Choi wrote: > Thanks a lot for the nice summary, especially on the Docs part! > I would like to add an additional summary with more context from the I18n > perspective. > > Note: I mainly participated in the Docs/I18n discussion only on Monday & > Tuesday > (not available during Wed - Fri due to conflicts with other work in my > country), > and my summary might differ from what the current I18n PTL would write had he > participated in the Stein PTG, > but I would like to summarize as I18n ex-PTL (Ocata, Pike) and as one of the > active participants in the Docs/I18n discussion. > > The Documentation & I18n teams started to have collaborative discussions > at the Pike PTG. > Following the Queens & Rocky cycles, I am so happy that the Documentation & > I18n teams had tight collaboration > again at the Stein PTG for horizontal discussions and issue sharing.
> > More details on the I18n issues are available in the bottom part ("i18n > Topics") of: > https://etherpad.openstack.org/p/docs-i18n-ptg-stein > > PROJECT DOCUMENTATION TRANSLATION SUPPORT > > This year, the I18n team actively started to support project documentation > translation [1], and there was progress > on defining documentation translation targets, generatepot infra jobs, > and translation sync from project repositories to > Zanata for translation sources, and from Zanata to project repositories for > translated strings. > [2] and [3] are parts of the work the I18n team did in the previous cycle, and the > final part would be how to support translated documentation publication, > aligned with the Documentation team, since the PDF support implementation is > also related to how PDF files are published for project repositories. > > Although there was a very nice discussion during the last Vancouver Summit [4], > a more generic idea on the infra side of how to better support translated > documentation & PDF builds and translation > will be needed after some changes to the Project Testing Interface, which is > used for project documentation builds [5]. > > [6] is a nice summary from Doug (really appreciated!) of the direction > and plans for PDF and translation builds > making use of the openstack-tox-docs job [7], and the I18n team would like to > continue to work with the Documentation > and Infrastructure teams on the actual implementation during the Stein cycle. > > USER SURVEY, TRANSLATING WHITEPAPERS, AND RECOGNITION OF TRANSLATORS > > With nice collaboration between the Foundation and the I18n team, the I18n team > started to translate > the OpenStack user survey [8] (after an initial discussion at the Pike PTG), the edge > computing whitepaper [9], > and the container whitepaper [10] into multiple languages with many language > coordinators and translators.
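[Editor's note: as a rough illustration of the openstack-tox-docs direction mentioned above, attaching that job to a project is a small Zuul configuration stanza. This is a hypothetical sketch only; real projects usually inherit such jobs through shared project-templates, so verify against the project's actual .zuul.yaml.]

```yaml
# Hypothetical .zuul.yaml fragment: run the openstack-tox-docs job
# (the documentation build job discussed above, and the planned hook
# for PDF and translated-doc builds) in both check and gate.
- project:
    check:
      jobs:
        - openstack-tox-docs
    gate:
      jobs:
        - openstack-tox-docs
```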
> > Those translation efforts might be different from the Active Technical > Contributor (ATC) recognition > which translators also get for OpenStack project translation and > technical documentation translation [11]. > Some translators shared that they would be interested in translating > technical documents but not in the > OpenStack user survey and other non-technical documents. > > I thought that it might be due to the different governance (Foundation-led artifacts and > official projects under the Technical Committee might be different), > and Active User Contributor (AUC) [12] recognition might be a good idea. > Although I could not discuss the details with User Committee members > during the PTG, > the Foundation agreed that AUC recognition for such translators would be a > good idea, and Melvin, > one of the User Committee members, agreed with the idea during a very short > conversation. > In my opinion, it will take some time for more discussion and agreement > on detailed criteria > (e.g., what number of words might be appropriate, aligning with the current AUC > recognition criteria) with the User Committee and the Foundation, > but let's try to move forward on this :) > > Also, documentation of the detailed steps and activities for the user survey in > future years, and more on the whitepapers, > would be important, so the I18n team will better document how I18n team > members take action, with steps like [13]. > > And some translators shared concerns that there is no I18n in the OpenStack > project navigator and map. > It is also related to recognition of what translators contributed. > The Foundation explained that it might be due to the intended purpose of > each artifact > (e.g., the OpenStack map was designed to show OpenStack components and how > those components interact with each other from a technical perspective), > and the Foundation shared that it would do its best to aggregate > scattered definitions (e.g., [14], [15], and elsewhere) > for consistency, and find a nice place for Docs, I18n, Congress, etc.
> on the Software/Project Navigator. > > TRANSLATING CONTRIBUTOR GUIDE WITH FIRST CONTACT SIG > > The detailed summary is in [16]. To summarize: > - I shared the I18n process with First Contact SIG members. Participants > acknowledged that > setting a *translation period* would be needed, but starting with the > initial translation process > would be a good idea since the guide is not translated yet. > - Participants think that user groups would be interested in > translating the contributor guide. > - I will set up translation sync and publication for the contributor guide, > and Kendall Nelson will kindly > help explain the background of adding the contributor guide as a > translation target. > > I18N-ING STORYBOARD [17] > > Although the I18n team now uses Launchpad [18] for task management, > I think there is a mix of multiple-language issues in the "bugs" section, > which prevents maintenance > by language coordinators. After a discussion on the I18n mailing list [19], I > thought that it would be great > if Storyboard were internationalized, so that the I18n team could migrate from > Launchpad to Storyboard and > more global contributors could access Storyboard in their native languages. > > One issue regarding I18n-ing Storyboard was that the story description for > most projects needs to be written > in English, and I18n-ing Storyboard might mislead contributors into writing > descriptions in non-English. > Participants thought that adding a warning message on posting pages > (e.g., when posting a Story) would solve this issue. > > Participants shared the current implementation technologies used in > Storyboard and other dashboard projects > such as Horizon and the TripleO UI. Brainstorming and making use of I18n > libraries in Storyboard is the action item at this point. > > PROCESS ON OPENSTACK-HELM DOCS FROM NON-ENGLISH TO ENGLISH > > I participated in a brief discussion with the OpenStack-Helm team about > documentation contribution from Korean to English.
> The current documentation and I18n process goes from English to multiple > languages, but some active contributors on the OpenStack-Helm team > would like to contribute Korean documentation first and translate > it from Korean into English. > My suggestion is to keep the current documentation and I18n process but add > additional steps before the English rst-based documentation. > For example, if the OpenStack-Helm team cores agree to have a > *doc-ko_KR/source* folder, contributing Korean documentation is limited > to that folder, > Zanata helps to translate the folder from Korean to English, > the translated po files can produce English rsts, and the translated rst files are added > to the *doc/source* folder. > There would be no effect on the existing documentation & I18n process, and > Korean documentation contribution and translation from Korean to English > would be encouraged. > > The OpenStack-Helm team agreed with these steps, and I think this would be a > good step toward more internationalization, as a pilot project first, but > it can be extended > to more projects in the future. > > > This is my summary of the I18n issues discussed during the Stein PTG. > Thanks a lot to all members who are involved in and help Documentation and I18n.
> Although there were no discussion on setting translation plans and > priorities, let's do that through [21] with Frank :) > > And finally, more pictures are available through [22] :) > > > With many thanks, > > /Ian > > [1] > > https://blueprints.launchpad.net/openstack-i18n/+spec/project-doc-translation-support > [2] https://review.openstack.org/#/c/545377/ > [3] https://review.openstack.org/#/c/581000/ > [4] https://etherpad.openstack.org/p/YVR-docs-support-pdf-and-translation > [5] https://review.openstack.org/#/c/580495/ > [6] > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134609.html > [7] > > https://docs.openstack.org/infra/openstack-zuul-jobs/jobs.html#job-openstack-tox-docs > [8] https://www.openstack.org/user-survey > [9] > > https://www.openstack.org/edge-computing/cloud-edge-computing-beyond-the-data-center > [10] > https://www.openstack.org/containers/leveraging-containers-and-openstack/ > [11] > > https://docs.openstack.org/i18n/latest/official_translator.html#atc-status-in-i18n-project > [12] > > https://governance.openstack.org/uc/reference/charter.html#active-user-contributors-auc > [13] https://docs.openstack.org/i18n/latest/release_management.html > [14] > > https://git.openstack.org/cgit/openstack/openstack-manuals/tree/www/project-data/rocky.yaml > [15] > https://github.com/ttx/openstack-map/blob/master/openstack_components.yaml > [16] https://etherpad.openstack.org/p/FC_SIG_ptg_stein > [17] https://etherpad.openstack.org/p/sb-stein-ptg-planning > [18] https://launchpad.net/openstack-i18n > [19] > > http://lists.openstack.org/pipermail/openstack-i18n/2018-September/003307.html > [20] > http://lists.openstack.org/pipermail/openstack/2018-September/046937.html > [21] > > http://lists.openstack.org/pipermail/openstack-i18n/2018-September/003314.html > [22] > > https://www.dropbox.com/sh/2pmvfkstudih2wf/AAAG2c6C_OXorMRFH66AvboYa/Docs%20%2B%20i18n > > > Petr Kovar wrote on 9/20/2018 8:37 AM: > > Hi all, > > > > Just wanted 
to share a summary of docs- and i18n-related meetings > > and discussions we had in Denver last week during the Stein Project > > Teams Gathering. > > > > The overall schedule for all our sessions with additional comments and > > meeting minutes can be found here: > > > > https://etherpad.openstack.org/p/docs-i18n-ptg-stein > > > > Our obligatory team picture (with quite a few members missing) can be > > found here (courtesy of Foundation folks): > > > > https://pmkovar.fedorapeople.org/DSC_4422.JPG > > > > To summarize what I found most important: > > > > OPS DOCS > > > > We met with the Ops community to discuss the future of Ops docs. The plan > > is for the Ops group to take ownership of the operations-guide (done), > > ha-guide (in progress), and the arch-design guide (to do). > > > > These three documents are being moved from openstack-manuals to their own > > repos, owned by the newly formed Operations Documentation SIG. > > > > See also > > > https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018-operations-guide > > for more notes. > > > > DOCS SITE & DESIGN > > > > We discussed improving the site navigation, guide summaries (particularly > > install-guide), adding a new index page for project team contrib guides, > and > > more. We met with the Foundation staff to discuss the possibility of > getting > > assistance with site design work. > > > > We are also looking into accepting contributions from the Strategic Focus > > Areas folks to make parts of the docs toolchain like openstackdocstheme > more > > easily reusable outside of the official OpenStack infrastructure. > > > > We got feedback on front page template for project team docs, with Ironic > > being the pilot for us. > > > > We got input on restructuring and reworking specs site to make it easier > > for users to understand that specs are not feature descriptions nor > project > > docs, and to make it more consistent in how the project teams publish > their > > specs. 
This will need to be further discussed with the folks owning the > > specs site infra. > > > > Support status badges showing at the top of docs.o.o pages may not work > well > > for projects following the cycle-with-intermediary release model, such as > > Swift. We need to rethink how we configure and present the badges. > > > > There are also some UX bugs present for the badges > > (https://bugs.launchpad.net/openstack-doc-tools/+bug/1788389). > > > > TRANSLATIONS > > > > We met with the infra team to discuss progress on translating project > team > > docs and, related to that, PDF builds. > > > > With the Foundation staff, we discussed translating Edge and Container > > whitepapers and related material. > > > > REFERENCE, REST API DOCS AND RELEASE NOTES > > > > With the QA team, we discussed the scope and purpose of the > > /doc/source/reference documentation area in project docs. Because the > > scope of /reference might be unclear and used inconsistently by project > > teams, the suggestion is to continue with the original migration plan and > > migrate REST API and possibly Release Notes under /doc/source, as > described > > in https://docs.openstack.org/doc-contrib-guide/project-guides.html. > > > > CONTRIBUTOR GUIDE > > > > The OpenStack Contributor Guide was discussed in a separate session, see > > https://etherpad.openstack.org/p/FC_SIG_ptg_stein for notes. > > > > THAT'S IT? > > > > Please add to the list if I missed anything important, particularly for > > i18n. > > > > Thank you to everybody who attended the sessions, and a special thanks > goes > > to all the PTG organizers! 
> > > > Cheers, > > pk > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-I18n mailing list > OpenStack-I18n at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Sep 20 14:19:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 20 Sep 2018 09:19:46 -0500 Subject: [openstack-dev] Forum Topic Submission Period In-Reply-To: <5B9FD2BB.3060806@openstack.org> References: <5B9FD2BB.3060806@openstack.org> Message-ID: <51580429-12ad-04b8-0efa-e11a14eaa87b@gmail.com> On 9/17/2018 11:13 AM, Jimmy McArthur wrote: > The Forum Topic Submission session started September 12 and will run > through September 26th.  Now is the time to wrangle the topics you > gathered during your Brainstorming Phase and start pushing forum topics > through. Don't rely only on a PTL to make the agenda... step on up and > place the items you consider important front and center. > > As you may have noticed on the Forum Wiki > (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP > tool this year. We did our best to remove Summit specific language, but > if you notice something, just know that you are submitting to the > Forum.  URL is here: > > https://www.openstack.org/summit/berlin-2018/call-for-presentations > > Looking forward to seeing everyone's submissions! > > If you have questions or concerns about the process, please don't > hesitate to reach out. Another question. 
In the before times, when we just had that simple form to submit forum sessions and then the TC/UC/Foundation reviewed the list and picked the sessions, it was very simple to see what other sessions were proposed and say, "oh good someone is covering this already, I don't need to worry about it". With the move to the CFP forms like the summit sessions, that is no longer available, as far as I know. There have been at least a few cases this week where someone has said, "this might be a good topic, but keystone is probably already covering it, or $FOO SIG is probably already covering it", but without herding the cats to ask and find out who is all doing what, it's hard to know. Is there some way we can get back to having a public view of what has been proposed for the forum so we can avoid overlap, or at worst not proposing something because people assume someone else is going to cover it? -- Thanks, Matt From haleyb.dev at gmail.com Thu Sep 20 14:46:56 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Thu, 20 Sep 2018 10:46:56 -0400 Subject: [openstack-dev] [neutron] Core status In-Reply-To: References: Message-ID: <199e1c34-b3a3-f5b5-9ecc-171ca9d83c2b@gmail.com> On 09/19/2018 02:19 PM, Gary Kotton wrote: > Hi, > > I have recently transitioned to a new role where I will be working on > other parts of OpenStack. Sadly I do not have the necessary cycles to > maintain my core responsibilities in the neutron community. Nonetheless > I will continue to be involved. Thanks for all your work over the years, especially in keeping the reviews moving along on the neutron stable branches. Good luck in your new role!
-Brian From mriedemos at gmail.com Thu Sep 20 15:00:54 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 20 Sep 2018 10:00:54 -0500 Subject: [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: Message-ID: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> On 9/20/2018 4:16 AM, John Garbutt wrote: > Following on from the PTG discussions, I wanted to bring everyone's > attention to Nova's plans to deprecate ComputeCapabilitiesFilter, > including most of the integration with Ironic Capabilities. > > To be specific, this is my proposal in code form: > https://review.openstack.org/#/c/603102/ > > Once the code we propose to deprecate is removed we will stop using > capabilities pushed up from Ironic for 'scheduling', but we would still > pass the capabilities request in the flavor down to Ironic (until we get > some standard traits and/or deploy templates sorted for things like UEFI). > > Functionally, we believe all use cases can be replaced by using the > simpler placement traits (this is more efficient than post placement > filtering done using capabilities): > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/ironic-driver-traits.html > > Please note the recent addition of forbidden traits that helps improve > the usefulness of the above approach: > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-forbidden-traits.html > > For example, a flavor request for GPUs >= 2 could be replaced by a > custom trait that reports if a given Ironic node has > CUSTOM_MORE_THAN_2_GPUS. That is a bad example (longer term we don't > want to use traits for this, but that is a discussion for another day) > but it is the example that keeps being raised in discussions on this topic.
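[Editor's note: the required/forbidden traits scheme described above boils down to set operations over each node's trait set. The following is a toy sketch only — not Nova or Placement code; the function and node names are invented for illustration.]

```python
# Toy illustration of required/forbidden trait matching.
# Each candidate node advertises a set of traits; a flavor request
# names traits that must be present (required) or absent (forbidden).
def select_nodes(nodes, required=(), forbidden=()):
    """Return the names of nodes whose traits satisfy the request."""
    return [
        name for name, traits in nodes.items()
        if set(required) <= traits and not (set(forbidden) & traits)
    ]

nodes = {
    "node1": {"CUSTOM_MORE_THAN_2_GPUS", "CUSTOM_RAID"},
    "node2": {"CUSTOM_RAID"},
    "node3": {"CUSTOM_MORE_THAN_2_GPUS"},
}

# Required trait narrows candidates to GPU-capable nodes.
print(select_nodes(nodes, required=["CUSTOM_MORE_THAN_2_GPUS"]))  # ['node1', 'node3']
# Forbidden trait excludes RAID nodes.
print(select_nodes(nodes, forbidden=["CUSTOM_RAID"]))  # ['node3']
```

The forbidden case is what the placement-forbidden-traits spec adds: exclusion is expressed in the request itself instead of a post-placement capabilities filter.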
> > The main reason for reaching out in this email is to ask if anyone has > needs that the ResourceClass and Traits scheme does not currently > address, or can think of a problem with a transition to the newer approach. I left a few comments in the change, but I'm assuming as part of the deprecation we'd remove the filter from the default enabled_filters list so new installs don't automatically get warnings during scheduling? Another thing is about existing flavors configured for these capabilities-scoped specs. Are you saying during the deprecation we'd continue to use those even if the filter is disabled? In the review I had suggested that we add a pre-upgrade check which inspects the flavors and if any of these are found, we report a warning meaning those flavors need to be updated to use traits rather than capabilities. Would that be reasonable? -- Thanks, Matt From jimmy at openstack.org Thu Sep 20 15:23:09 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 20 Sep 2018 10:23:09 -0500 Subject: [openstack-dev] Forum Topic Submission Period In-Reply-To: <51580429-12ad-04b8-0efa-e11a14eaa87b@gmail.com> References: <5B9FD2BB.3060806@openstack.org> <51580429-12ad-04b8-0efa-e11a14eaa87b@gmail.com> Message-ID: <5BA3BB5D.3060404@openstack.org> Matt, Another good question... Matt Riedemann wrote: > On 9/17/2018 11:13 AM, Jimmy McArthur wrote: >> SNIP > > Another question. In the before times, when we just had that simple > form to submit forum sessions and then the TC/UC/Foundation reviewed > the list and picked the sessions, it was very simple to see what other > sessions were proposed and say, "oh good someone is covering this > already, I don't need to worry about it". With the move to the CFP > forms like the summit sessions, that is no longer available, as far as > I know. 
There have been at least a few cases this week where someone > has said, "this might be a good topic, but keystone is probably > already covering it, or $FOO SIG is probably already covering it", but > without herding the cats to ask and find out who is all doing what, > it's hard to know. > > Is there some way we can get back to having a public view of what has > been proposed for the forum so we an avoid overlap, or at worst not > proposing something because people assume someone else is going to > cover it? This is basically the CFP equivalent: https://www.openstack.org/summit/berlin-2018/vote-for-speakers Voting isn't necessary, of course, but it should allow you to see submissions as they roll in. Does this work for your purposes? Thanks, Jimmy From mriedemos at gmail.com Thu Sep 20 16:27:25 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 20 Sep 2018 11:27:25 -0500 Subject: [openstack-dev] Forum Topic Submission Period In-Reply-To: <5BA3BB5D.3060404@openstack.org> References: <5B9FD2BB.3060806@openstack.org> <51580429-12ad-04b8-0efa-e11a14eaa87b@gmail.com> <5BA3BB5D.3060404@openstack.org> Message-ID: On 9/20/2018 10:23 AM, Jimmy McArthur wrote: > This is basically the CFP equivalent: > https://www.openstack.org/summit/berlin-2018/vote-for-speakers  Voting > isn't necessary, of course, but it should allow you to see submissions > as they roll in. > > Does this work for your purposes? Yup, that should do it, thanks! -- Thanks, Matt From fungi at yuggoth.org Thu Sep 20 16:32:49 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 20 Sep 2018 16:32:49 +0000 Subject: [openstack-dev] [all] We're combining the lists! (was: Bringing the community together...) 
In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> tl;dr: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this is being sent) will be replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list is open for subscriptions[0] now, but is not yet accepting posts until Monday November 19 and it's strongly recommended to subscribe before that date so as not to miss any messages posted there. The old lists will be configured to no longer accept posts starting on Monday December 3, but in the interim posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them any time after the 19th and not miss any messages. Now on to the details... The original proposal[1] I cross-posted to these lists in August received overwhelmingly positive feedback (indeed only one strong objection[2] was posted, thanks Thomas for speaking up, and my apologies in advance if this makes things less convenient for you), which is unusual since our community usually tends to operate on silent assent and tacit agreement. Seeing what we can only interpret as majority consensus for the plan among the people reading messages posted to these lists, a group of interested individuals met last week in the Infrastructure team room at the PTG to work out the finer details[3]. We devised a phased timeline: During the first phase (which begins with this announcement) the new openstack-discuss mailing list will accept subscriptions but not posts. Its short and full descriptions indicate this, as does the welcome message sent to all new subscribers during this phase. The list is configured for "emergency moderation" mode so that all posts, even those from subscribers, immediately land in the moderation queue and can be rejected with an appropriate message. 
We strongly recommend everyone who is on any of the current general openstack, openstack-dev, openstack-operators and openstack-sigs lists subscribe to openstack-discuss during this phase in order to avoid missing any messages to the new list. Phase one lasts roughly one month and ends on Monday November 19, just after the OpenStack Stein Summit in Berlin. The second phase picks up at the end of the first. During this phase, emergency moderation is no longer in effect and subscribers can post to the list normally (non-subscribers are subject to moderation of course in order to limit spam). Any owners/moderators from the original lists who wish it will be added to the new one to collaborate on moderation tasks. At this time the openstack-discuss list address itself will be subscribed to posts from the openstack, openstack-dev, openstack-operators and openstack-sigs mailing lists so anyone who wishes to unsubscribe from those can do so at any time during this phase without missing any replies sent there. The list descriptions and welcome message will also be updated to their production prose. Phase two runs for two weeks ending on Monday December 3. The third and final phase begins at the end of the second, when further posts to the general openstack, openstack-dev, openstack-operators and openstack-sigs lists will be refused and the descriptions for those lists updated to indicate they're indefinitely retired from use. The old archives will still be preserved of course, but no new content will appear in them. A note about DMARC/DKIM: during the planning discussion we also spoke briefly about the problems we encounter on the current lists whereby subscriber MTAs which check DKIM signatures appearing in some posts reject them and cause those subscribers to get unsubscribed after too many of these bounces. 
While reviewing the various possible mitigation options available to us, we eventually resolved that the least objectionable solution was to cease modifying the list subject and body. As such, for the new openstack-discuss list you won't see [openstack-discuss] prepended to message subjects, and there will be no list footer block added to the message body. Rest assured the usual RFC 2369 List-* headers[4] will still be added so MUAs can continue to take filtering actions based on them as on our other lists. I'm also including a couple of FAQs which have come up over the course of this... Why make a new list instead of just directing people to join an existing one such as the openstack general ML? For one, the above list behavior change to address DMARC/DKIM issues is a good reason to want a new list; making those changes to any of the existing lists is already likely to be disruptive anyway as subscribers may be relying on the subject mangling for purposes of filtering list traffic. Also as noted earlier in the thread for the original proposal, we have many suspected defunct subscribers who are not bouncing (either due to abandoned mailboxes or MTAs black-holing them) so this is a good opportunity to clean up the subscriber list and reduce the overall amount of E-mail unnecessarily sent by the server. Why not simply auto-subscribe everyone from the four older lists to the new one and call it a day? Well, I personally would find it rude if a list admin mass-subscribed me to a mailing list I hadn't directly requested. Doing so may even be illegal in some jurisdictions (we could probably make a case that it's warranted, but it's cleaner to not need to justify such an action). Much like the answer to the previous question, the changes in behavior (and also in the list name itself) are likely to cause lots of subscribers to need to update their message filtering rules anyway. I know by default it would all start landing in my main inbox, and annoy me mightily. 
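[Editor's note: for subscribers who rely on the RFC 2369 List-* headers mentioned above, a client-side rule keyed on List-Id is one option. A hypothetical Sieve sketch follows — the exact list-id value is an assumption and should be verified against the headers of a real message from the new list.]

```sieve
# Hypothetical Sieve rule: file openstack-discuss traffic by its
# List-Id header instead of relying on a subject prefix (which the
# new list will not add). The list-id value below is an assumption --
# check it against a delivered message before using.
require ["fileinto"];

if header :contains "list-id" "openstack-discuss.lists.openstack.org" {
    fileinto "openstack-discuss";
}
```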
What subject tags are we going to be using to identify messages of interest and to be able to skip those we don't care about? We're going to continuously deploy a list of recommended subject tags in a visible space, either on the listserv's WebUI or the Infra Manual and link to it liberally. There is already an initial set of suggestions[5] being brainstormed, so feel free to add any there you feel might be missing. It's not yet been decided whether we'll also include these in the Mailman "Topics" configuration to enable server-side filtering on them (as there's a good chance we'll be unable to continue supporting that after an upgrade to Mailman 3), so for now it's best to assume you may need to add them to your client-side filters if you rely on that capability. If you have any further questions, please feel free to respond to this announcement so we can make sure they're answered. [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-sigs/2018-August/000493.html [2] http://lists.openstack.org/pipermail/openstack-dev/2018-August/134074.html [3] https://etherpad.openstack.org/p/infra-ptg-denver-2018 [4] https://www.ietf.org/rfc/rfc2369.txt [5] https://etherpad.openstack.org/p/common-openstack-ml-topics -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dmellado at redhat.com Thu Sep 20 16:33:33 2018 From: dmellado at redhat.com (Daniel Mellado) Date: Thu, 20 Sep 2018 18:33:33 +0200 Subject: [openstack-dev] Nominating Luis Tomás Bolívar for kuryr-kubernetes core Message-ID: <6a804d75-90f9-c561-5891-7b89f8fa9cb2@redhat.com> Hi All, I'd like to nominate Luis Tomás for Kuryr-Kubernetes core.
He has been contributing to the project development with both features and quality reviews at core reviewer level, as well as being the stable branch liaison, keeping an eye on every needed backport and bug, and fighting and debugging lbaas issues. Please follow up with a +1/-1 to express your support, even if he makes the worst jokes ever! Thanks! Daniel -------------- next part -------------- A non-text attachment was scrubbed... Name: 0x13DDF774E05F5B85.asc Type: application/pgp-keys Size: 2208 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From miguel at mlavalle.com Thu Sep 20 16:34:56 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 20 Sep 2018 11:34:56 -0500 Subject: [openstack-dev] [neutron] Core status In-Reply-To: <199e1c34-b3a3-f5b5-9ecc-171ca9d83c2b@gmail.com> References: <199e1c34-b3a3-f5b5-9ecc-171ca9d83c2b@gmail.com> Message-ID: Hi Gary, As I said during our private conversation, we don't like to see you go, but we understand that you have other career opportunities. I wish you luck in your next challenge, and please remember that you will always be welcome here. Best regards Miguel On Thu, Sep 20, 2018 at 9:49 AM Brian Haley wrote: > On 09/19/2018 02:19 PM, Gary Kotton wrote: > > Hi, > > > > I have recently transitioned to a new role where I will be working on > > other parts of OpenStack. Sadly I do not have the necessary cycles to > > maintain my core responsibilities in the neutron community. Nonetheless > > I will continue to be involved. > > Thanks for all your work over the years, especially in keeping the > reviews moving along on the neutron stable branches. Good luck in your > new role!
> > -Brian > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Sep 20 17:04:34 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 20 Sep 2018 12:04:34 -0500 Subject: [openstack-dev] [release] Release countdown for week R-28 and R-27, September 24 - October 5 Message-ID: <20180920170434.GA4024@sm-workstation> Welcome to the release countdown email. I am going to keep these biweekly for now since there are not as many critical deadlines, but please let me know if any more frequent or additional updates would be useful. Development Focus ----------------- Team focus should be on spec approval and implementation of priority features. Action Required --------------- Matt Riedemann started a thread about the Ocata release entering the new "extended maintenance" phase: http://lists.openstack.org/pipermail/openstack-dev/2018-September/134810.html We are actually past the published date for that transition, but this is the first time we've done this, so I think we're all working through the process. Matt raises the good point that any teams that have merged patches for stable/ocata that would like to have those officially available should do a final stable release off of stable/ocata. After that, as part of extended maintenance, teams can choose how they handle further stable backports, but there will no longer be official releases from stable/ocata. General Information ------------------- Please be aware of the project specific deadlines that vary slightly from the overall release schedule [1]. Teams should now be making progress towards the cycle goals [2]. Please prioritize reviews for these appropriately.
[1] https://releases.openstack.org/stein/schedule.html [2] https://governance.openstack.org/tc/goals/stein/index.html If your project has a library that is still a 0.x release, start thinking about when it will be appropriate to do a 1.0 version. The version number does signal the state, real or perceived, of the library, so we strongly encourage going to a full major version once things are in a good and usable state. PTLs and/or release liaisons - we are still a little ways out from the first milestone, but a reminder that we would love to have you around during our weekly meeting [3]. It would also be very helpful if you would linger in the #openstack-release channel during deadline weeks. [3] http://eavesdrop.openstack.org/#Release_Team_Meeting Upcoming Deadlines & Dates -------------------------- Stein-1 milestone: October 25 (R-24 week) Forum at OpenStack Summit in Berlin: November 13-15 -- Sean McGinnis (smcginnis) From elod.illes at ericsson.com Thu Sep 20 17:08:30 2018 From: elod.illes at ericsson.com (=?UTF-8?B?RWzDtWQgSWxsw6lz?=) Date: Thu, 20 Sep 2018 19:08:30 +0200 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: Message-ID: Hi Matt, About 1.: I think it is a good idea to cut a final release (especially as some vendor/operator would be glad even if there were some releases in Extended Maintenance, too, which most probably won't happen...) -- saying that without knowing how much of a burden it would be for projects to do this final release... After that it sounds reasonable to tag the branches EM (as it is written in the mentioned resolution). Do you have any plan for how to coordinate the 'final releases' and do the EM-tagging? Thanks for raising these questions! Cheers, Előd On 2018-09-18 21:27, Matt Riedemann wrote: > The release page says Ocata is planned to go into extended maintenance > mode on Aug 27 [1].
There really isn't much to this except it means we > don't do releases for Ocata anymore [2]. There is a caveat that > project teams that do not wish to maintain stable/ocata after this > point can immediately end of life the branch for their project [3]. We > can still run CI using tags, e.g. if keystone goes ocata-eol, devstack > on stable/ocata can still continue to install from stable/ocata for > nova and the ocata-eol tag for keystone. Having said that, if there is > no undue burden on the project team keeping the lights on for > stable/ocata, I would recommend not tagging the stable/ocata branch > end of life at this point. > > So, questions that need answering are: > > 1. Should we cut a final release for projects with stable/ocata > branches before going into extended maintenance mode? I tend to think > "yes" to flush the queue of backports. In fact, [3] doesn't mention > it, but the resolution said we'd tag the branch [4] to indicate it has > entered the EM phase. > > 2. Are there any projects that would want to skip EM and go directly > to EOL (yes this feels like a Monopoly question)? > > [1] https://releases.openstack.org/ > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > [3] > https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance > [4] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life > From ed at leafe.com Thu Sep 20 17:13:57 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 20 Sep 2018 12:13:57 -0500 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: <7110E288-7312-4EB5-A4DF-F83FD722E305@leafe.com> Greetings OpenStack community, This newsletter is very different than the past few, in that there is some actual news. We, as a SIG, have recognized that we have moved into a new phase. With most of the API guidelines that we needed to write having been written, there is not "new stuff" to make demands on our time. 
In recognition of this, we are changing how we will work. Next Thursday, Sept 27, will be our last formal IRC meeting [7] in #openstack-meeting-3; after that we will switch to an "office hours" format, where API-SIG members will be available in the #openstack-sdks channel. We will have at least two office hours scheduled per week, to allow for more participation across time zones. We will discuss that schedule at next week's meeting, so if you have any preferences on this, either attend that meeting, or reply to this email, so that we can make a schedule that works well for most people. We will also discontinue this weekly newsletter, and instead only send it when something of note needs to be shared with the wider community. On another note, one of the biggest contributors to the SIG, Chris Dent (cdent), has finally realized that he is spread much too thinly these days, and needs to pull back from so many things demanding his time. So while he won't be as active in the group as before, I'm sure he'll be keeping an eye on the rest of us to make sure we don't mess things up too badly. So for all your good work, Chris, we say "Huzzah!". If you're interested in helping out, here are some things to get you started: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community.
* None # Guidelines Currently Under Review [3] * Add an api-design doc with design advice https://review.openstack.org/592003 * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * Version and service discovery series Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-sig,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://storyboard.openstack.org/#!/project/1039 [6] https://git.openstack.org/cgit/openstack/api-sig [7] http://eavesdrop.openstack.org/#API_Special_Interest_Group Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From ildiko.vancsa at gmail.com Thu Sep 20 19:49:24 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 20 Sep 2018 21:49:24 +0200 Subject: [openstack-dev] [os-upstream-institute][all] Upstream Institute training call for mentors!! 
Message-ID: <66E49EFB-C494-4BE4-B2C0-119DB60BEE37@gmail.com> Hi, We have two upcoming Upstream Institute trainings [1] and we are looking for mentors to join us and help new contributors to start their journey in OpenStack. We have a training in Stockholm on October 9 and one in Berlin right before the Summit on November 11-12. If you are available for either or both of these occasions and interested in joining our crew, please sign up on our wiki page [2]. If you have any questions please reply to this thread or reach out to the team on the #openstack-upstream-institute IRC channel. Thanks and Best Regards, Ildikó (IRC: ildikov) [1] https://docs.openstack.org/upstream-training/#upcoming-trainings [2] https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions From eumel at arcor.de Thu Sep 20 20:11:29 2018 From: eumel at arcor.de (Frank Kloeker) Date: Thu, 20 Sep 2018 22:11:29 +0200 Subject: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core In-Reply-To: <4f413d36-463e-477a-9886-79bf55df677c@suse.com> References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> <4f413d36-463e-477a-9886-79bf55df677c@suse.com> Message-ID: <07fcbf71a9406e8d7b918b238377d503@arcor.de> Am 2018-09-19 20:54, schrieb Andreas Jaeger: > On 2018-09-19 20:50, Petr Kovar wrote: >> Hi all, >> >> Based on our PTG discussion, I'd like to nominate Ian Y. Choi for >> membership in the openstack-doc-core team. I think Ian doesn't need an >> introduction, he's been around for a while, recently being deeply >> involved >> in infra work to get us robust support for project team docs >> translation and >> PDF builds. >> >> Having Ian on the core team will also strengthen our integration with >> the i18n community. >> >> Please let the ML know should you have any objections.
> The opposite ;), heartily agree with adding him, > > Andreas ++ Frank From openstack-dev at dseven.org Thu Sep 20 20:14:47 2018 From: openstack-dev at dseven.org (iain macdonnell) Date: Thu, 20 Sep 2018 13:14:47 -0700 Subject: [openstack-dev] [glance] replace locations on queued image Message-ID: I feel like I've been chasing this issue around in circles for months, and I need the core team to make a decision. I cannot proceed with Rocky deployment/upgrade until this gets resolved. It's currently possible to add to, or completely replace, the "locations" for an image with status "queued", using HTTP PATCH. Prior to the following fix, the "add" operation would update the status to "active", but the "replace" operation would leave it in "queued" status, with no way to get it out of that status (other than delete). I thought that we agreed that that was a bug, which I submitted a fix for: https://review.openstack.org/592775 I don't see any API breakage from this fix. The previous state left the image in a permanently unusable state. I can't see any valid use-case for that. Now (this morning's meeting) it seems like we're back to debating whether or not it's valid to "replace" "locations" for an image in "queued" status. My interpretation of "replace" is "I want it to look exactly like this, regardless of what's there now", as opposed to "add", which means "append this to whatever is currently there". If it's not valid to "replace" "locations" for an image in "queued" status, I think that the API should reject the request, not leave the image in limbo ('queued') status. I'm OK with that - I can use "add" - but I'll need to update this: https://review.openstack.org/597648 to apply only to "add". glance core team, please make the decision. Thanks, ~iain (slightly frustrated) From mrhillsman at gmail.com Thu Sep 20 20:31:35 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 20 Sep 2018 15:31:35 -0500 Subject: [openstack-dev] [docs] Nominating Ian Y.
Choi for openstack-doc-core In-Reply-To: <07fcbf71a9406e8d7b918b238377d503@arcor.de> References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> <4f413d36-463e-477a-9886-79bf55df677c@suse.com> <07fcbf71a9406e8d7b918b238377d503@arcor.de> Message-ID: ++ On Thu, Sep 20, 2018 at 3:11 PM Frank Kloeker wrote: > Am 2018-09-19 20:54, schrieb Andreas Jaeger: > > On 2018-09-19 20:50, Petr Kovar wrote: > >> Hi all, > >> > >> Based on our PTG discussion, I'd like to nominate Ian Y. Choi for > >> membership in the openstack-doc-core team. I think Ian doesn't need an > >> introduction, he's been around for a while, recently being deeply > >> involved > >> in infra work to get us robust support for project team docs > >> translation and > >> PDF builds. > >> > >> Having Ian on the core team will also strengthen our integration with > >> the i18n community. > >> > >> Please let the ML know should you have any objections. > > > > The opposite ;), heartly agree with adding him, > > > > Andreas > > ++ > > Frank > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Thu Sep 20 20:46:32 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Thu, 20 Sep 2018 16:46:32 -0400 Subject: [openstack-dev] Fwd: Denver Ops Meetup post-mortem In-Reply-To: References: Message-ID: The issue I ran into with IRC was a bit more obscure. "real IRC" is entirely blocked from all networks provided to me by my employer (even the office wifi). The web interface I was using (irccloud) didn't work for nickname registration either. 
When trying real (non-web-wrapped) IRC from my laptop via an LTE hotspot it also failed. We eventually worked out that it's because Freenode has blacklisted large IP ranges including my AT&T service. Can't connect unless authenticated, can't register nickname for auth because not connected. The answer in that case is to register the nickname on http://webchat.freenode.net This "chicken and egg" problem is explained here: https://superuser.com/questions/1220409/irc-how-to-register-on-freenode-using-hexchat-when-i-get-disconnected-immediat Chris On Thu, Sep 20, 2018 at 12:18 AM Kendall Nelson wrote: > Hello! > > On Tue, Sep 18, 2018 at 12:36 PM Chris Morgan wrote: > >> >> >> ---------- Forwarded message --------- >> From: Chris Morgan >> Date: Tue, Sep 18, 2018 at 2:13 PM >> Subject: Denver Ops Meetup post-mortem >> To: OpenStack Operators >> >> >> Hello All, >> Last week we had a successful Ops Meetup embedded in the OpenStack >> Project Team Gathering in Denver. >> >> Despite generally being a useful gathering, there were definitely lessons >> learned and things to work on, so I thought it would be useful to share a >> post-mortem. I encourage everyone to share their thoughts on this as well. >> >> What went well: >> >> - some of the sessions were great and a lot of progress was made >> - overall attendance in the ops room was good >> - more developers were able to join the discussions >> - facilities were generally fine >> - some operators leveraged being at PTG to have useful involvement in >> other sessions/discussions such as Keystone, User Committee, Self-Healing >> SIG, not to mention the usual "hallway conversations", and similarly some >> project devs were able to bring pressing questions directly to operators. 
>> >> What didn't go so well: >> >> - Merging into upgrade SIG didn't go particularly well >> - fewer ops attended (in particular there were fewer from outside the US) >> - Some of the proposed sessions were not well vetted >> - some ops who did attend stated the event identity was diluted, it was >> less attractive >> - we tried to adjust the day 2 schedule to include late submissions, >> however it was probably too late in some cases >> >> I don't think it's so important to drill down into all the whys and >> wherefores of how we fell down here except to say that the ops meetups team >> is a small bunch of volunteers all with day jobs (presumably just like >> everyone else on this mailing list). The usual, basically. >> >> Much more important : what will be done to improve things going forward: >> >> - The User Committee has offered to get involved with the technical >> content. In particular to bring forward topics from other relevant events >> into the ops meetup planning process, and then take output from ops meetups >> forward to subsequent events. We (ops meetup team) have welcomed this. >> >> - The Ops Meetups Team will endeavor to start topic selection earlier and >> have a more critical approach. Having a longer list of possible sessions >> (when starting with material from earlier events) should make it at least >> possible to devise a better agenda. Agenda quality drives attendance to >> some extent and so can ensure a virtuous circle. >> >> - We need to work out whether we're doing fixed schedule events (similar >> to previous mid-cycle Ops Meetups) or fully flexible PTG-style events, but >> grafting one onto the other ad-hoc clearly is a terrible idea. This needs >> more discussion. >> >> - The Ops Meetups Team continues to explore strange new worlds, or at >> least get in touch with more and more OpenStack operators to find out what >> the meetups team and these events could do for them and hence drive the >> process better. 
One specific work item here is to help the (widely >> disparate) operator community with technical issues such as getting setup >> with the openstack git/gerrit and IRC. The latter is the preferred way for >> the community to meet, but is particularly difficult now with the >> registered nickname requirement. We will add help documentation on how to >> get over this hurdle. >> > > After you get onto freenode at IRC you can register your nickname with a > single command and then you should be able to join any of the channels. The > command you need: ' /msg nickserv register $PASSWORD $EMAIL_ADDRESS'. You > can find more instructions here about setting up IRC[1]. > > If you get stuck or have any questions, please let me know! I am happy to > help with the setup of IRC or gerrit or anything else that might be a > barrier. > > >> - YOUR SUGGESTION HERE >> >> Chris >> >> -- >> Chris Morgan >> >> >> -- >> Chris Morgan >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -Kendall Nelson (diablo_rojo) > > [1] https://docs.openstack.org/contributors/common/irc.html# > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifatafekn at gmail.com Thu Sep 20 21:00:14 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Fri, 21 Sep 2018 00:00:14 +0300 Subject: [openstack-dev] [vitrage][ptg] Vitrage virtual PTG Message-ID: Hi, We will hold the Vitrage virtual PTG on October 10-11th. 
You are welcome to join and suggest topics for discussion in the PTG etherpad[1]. Thanks, Ifat [1] https://etherpad.openstack.org/p/vitrage-stein-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Sep 20 21:15:25 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 20 Sep 2018 14:15:25 -0700 Subject: [openstack-dev] [StoryBoard] PTG Summary Message-ID: Hello Lovers of Task Tracking! So! We talked about a lot of things, and I went to a lot of rooms to talk about StoryBoard-related things, and it was already a week ago, so bear with me. We had a lot of good discussions as we were able to include SotK in discussions via videocalling. We also had the privilege of having our Outreachy intern come all the way from Cairo to Denver to join us :) Onto the summaries! Story Attachments ============== This topic has started coming up with increasing regularity. Currently, StoryBoard doesn't support attachments, but it's a feature that several projects claim is blocking their migration. The current workaround is either to trim down logs and paste the relevant section, or to host the file elsewhere and link to its location. After consulting with the infrastructure team, we concluded that currently, there is no donated storage. The next step is for me to draft a spec detailing our requirements and implementation details and then to include infra on the review to help them have something concrete to go to vendors with. For notes on the proposed method see the etherpad[1]. One other thing discussed during this topic was how we could maybe migrate the current attachments. This isn't supported by the migration script at this point, but it's something we could write a separate script for. It should be separate because it would be a painfully slow process and we wouldn't want to slow down the migration script more than it already is by the Launchpad API.
The attachments script would be run after the initial migration; that being said, everything still persists in Launchpad so things can still be referenced there. Handling Duplicate Stories ==================== This is also an ongoing topic for discussion. Duplicate stories if not handled properly could dilute the database as we get more projects migrated over. The plan we settled on is to add a ‘Mark as Duplicate’ button to the webclient and corresponding functions to the API. The user would be prompted for a link to the master story. The master story would get a new timeline event that would have the link to the duplicate and the duplicate story would have all tasks auto marked as invalid (aside from those marked as merged) so that the story then shows as inactive. The duplicate story could also get a timeline event that explains what happened and links to the master story. I’ve yet to create a story for all of this, but it’s on my todo list. Handling Thousands of Comments Per Story ================================== There’s this special flower story[2] that has literally thousands of comments on it because of all of the gerrit comments being added to the timeline for all the patches for all the tasks. Rendering of the timeline portion of the webpage in the webclient is virtually impossible. It will load the tasks and then hang forever. The discussion around this boiled down to this: other task trackers also can’t handle this and there is a better way to divvy up the story into several stories and contain them in a worklist for future, similar work. For now, users can select what they want to load in their timeline views for stories, so by unmarking all of the timeline events in their preferences, the story will load completely sans timeline details. Another solution we discussed to help alleviate the timeline load on stories with lots of tasks is to have a task field that links to the review, rather than a comment from gerrit every time a new patch gets pushed. 
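As a rough illustration of the "Mark as Duplicate" plan described under Handling Duplicate Stories above, the intended state changes could be sketched like this; all class, field, and status names here are hypothetical placeholders, not the actual StoryBoard API:

```python
# Hypothetical sketch of the proposed duplicate-marking flow: the master
# story records a timeline link to the duplicate, the duplicate links back,
# and all of the duplicate's tasks except merged ones become invalid so the
# story shows as inactive.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    status: str = "todo"  # e.g. todo, merged, invalid (illustrative values)

@dataclass
class Story:
    title: str
    tasks: list = field(default_factory=list)
    timeline: list = field(default_factory=list)

def mark_duplicate(duplicate: Story, master: Story) -> None:
    master.timeline.append(f"'{duplicate.title}' marked as a duplicate of this story")
    duplicate.timeline.append(f"Marked as duplicate of '{master.title}'")
    for task in duplicate.tasks:
        if task.status != "merged":
            task.status = "invalid"  # merged work is preserved as-is

dup = Story("bug (dup)", tasks=[Task("fix nova"), Task("fix docs", status="merged")])
mark_duplicate(dup, Story("bug (master)"))
print([t.status for t in dup.tasks])  # ['invalid', 'merged']
```

The real implementation would of course live in the API with a corresponding webclient button, as described above.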
Essentially we want to focus on cleaning up the timeline rather than just going straight to a pagination type of solution. It was also concluded that we want to add another user preference for page sizes of 1000. Tasks have not been created in the story related to this issue yet[3], but it's on my todo list. Project Group Descriptions ===================== There was a request to have project group descriptions, but currently there is nothing in the API handling this. Discussion concluded with agreement that this shouldn't be too difficult. All that needs to happen is a few additions to the API and the connection to managing group definitions in project-config. I still need to make a story for this. Translating storyboard-webclient ========================= There was an infrastructure mailing list thread a little while back that kicked off discussion on this topic. It was received as an interesting idea and could help with the adoption of StoryBoard outside of OpenStack. The biggest concern was communicating to users that are seeing the webclient rendered in some other language that they still need to create tasks/stories/worklists/boards in English or whatever the default language is for the organization that is hosting StoryBoard. This could be a banner when someone logs in, or something on users' dashboards. One of the things that needs to happen first is to find libraries for JavaScript and Angular for signaling what strings need to be translated. We didn't really outline next steps past that as it's not super high priority, but it's definitely an effort we would support if someone wanted to start driving it forward. Easier Rollback for Webclient Continuous Deployment ========================================= With the puppet-storyboard module we deploy from tarballs instead of from git right now, and we don't preserve earlier tarballs, which makes it difficult to roll back changes when we find issues.
There wasn’t a ton of discussion beyond: yes, we need to figure this out. Pre-zuulv3 we uploaded tarballs with the git sha; if we apply that to publish-openstack-javascript-content, that might help the situation. Managing Project Coresec Groups ========================== The vast majority of work on private stories has been implemented. Stories can be marked as private and users can subscribe other users to those private stories so that only those people can see them. The only convenience that is currently lacking is adding groups of users (manually or automatically if in a template story). Groups of users are currently only managed by StoryBoard admins. We would like to make this managed in a repository or by proxying gerrit group management. This shouldn’t be too complicated a change; it would only require some sort of flag being set for a group definition and then some database migration to sync those groups into the StoryBoard database. If you have opinions on this topic, it’s not all set in stone and we would love to hear your thoughts! Searching ======== It’s become apparent that while the search and type-ahead features of StoryBoard work better than most users think at first glance, it’s a problem that users struggle with searching as much as they do. We talked about possible solutions for this aside from writing documentation to cover searching in the webclient. The solution we talked about most was that it might be easier for our users if we used the gerrit query language as that is what the majority of our users are already familiar with. The next step here is to write a spec for using the gerrit query language - or some other language if users disagree about using the gerrit language. Show all OpenStack Repos in StoryBoard? ================================ Are we getting to the point where it would be helpful for the users of StoryBoard to be able to add tasks to stories for all the repos not already migrated to StoryBoard?
This would be incredibly helpful for things like release goal tracking where many repos that haven’t been migrated had tasks that were assigned to governance instead of the actual repo so as to be able to track everything in a single story. This is something we will want to take up with the TC during a set of office hours in the next week or so. Summary & Continuing Conversations ============================= My brain is mush. Hopefully I covered the majority of the important topics and did them justice! Anyone that was there, please feel free to correct me. Anyone that wasn’t there that is interested in getting involved with any of this, please join us in #storyboard on IRC or email us with the [Storyboard] tag to the dev or infra mailing lists. We also have weekly meetings[4] on Wednesdays at 19:00 UTC, please join us! I've got a lot of stories to make/update and tasks to add.. Thanks! -Kendall Nelson (diablo_rojo) [1] https://etherpad.openstack.org/p/sb-stein-ptg-planning [2] https://storyboard.openstack.org/#!/story/2002586 [3] https://storyboard.openstack.org/#!/story/2003525 [4] http://eavesdrop.openstack.org/#StoryBoard_Meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Sep 20 21:46:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 20 Sep 2018 15:46:43 -0600 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...) In-Reply-To: <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> Message-ID: <1537479809-sup-898@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000: > tl;dr: The openstack, openstack-dev, openstack-sigs and > openstack-operators mailing lists (to which this is being sent) will > be replaced by a new openstack-discuss at lists.openstack.org mailing > list. 
Since last week there was some discussion of including the openstack-tc mailing list among these lists to eliminate confusion caused by the fact that the list is not configured to accept messages from all subscribers (it's meant to be used for us to make sure TC members see meeting announcements). I'm inclined to include it and either use a direct mailing or the [tc] tag on the new discuss list to reach TC members, but I would like to hear feedback from TC members and other interested parties before calling that decision made. Please let me know what you think. Doug From emilien at redhat.com Thu Sep 20 21:49:30 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 20 Sep 2018 17:49:30 -0400 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...) In-Reply-To: <1537479809-sup-898@lrrr.local> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> Message-ID: On Thu, Sep 20, 2018 at 5:47 PM Doug Hellmann wrote: > Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000: > > tl;dr: The openstack, openstack-dev, openstack-sigs and > > openstack-operators mailing lists (to which this is being sent) will > > be replaced by a new openstack-discuss at lists.openstack.org mailing > > list. > > Since last week there was some discussion of including the openstack-tc > mailing list among these lists to eliminate confusion caused by the fact > that the list is not configured to accept messages from all subscribers > (it's meant to be used for us to make sure TC members see meeting > announcements). > > I'm inclined to include it and either use a direct mailing or the > [tc] tag on the new discuss list to reach TC members, but I would > like to hear feedback from TC members and other interested parties > before calling that decision made. Please let me know what you think. > +2 , easier to manage, easier to reach out. 
-- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From manjeet.s.bhatia at intel.com Thu Sep 20 21:57:38 2018 From: manjeet.s.bhatia at intel.com (Bhatia, Manjeet S) Date: Thu, 20 Sep 2018 21:57:38 +0000 Subject: [openstack-dev] [neutron] Core status In-Reply-To: References: Message-ID: Good luck for new role ! From: Gary Kotton [mailto:gkotton at vmware.com] Sent: Wednesday, September 19, 2018 11:20 AM To: OpenStack List Subject: [openstack-dev] [neutron] Core status Hi, I have recently transitioned to a new role where I will be working on other parts of OpenStack. Sadly I do not have the necessary cycles to maintain my core responsibilities in the neutron community. Nonetheless I will continue to be involved. Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Thu Sep 20 21:59:02 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 20 Sep 2018 16:59:02 -0500 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> Message-ID: I agree all lists should be merged as discussed otherwise why not just leave all things as they are :P On Thu, Sep 20, 2018 at 4:49 PM Emilien Macchi wrote: > > > On Thu, Sep 20, 2018 at 5:47 PM Doug Hellmann > wrote: > >> Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000: >> > tl;dr: The openstack, openstack-dev, openstack-sigs and >> > openstack-operators mailing lists (to which this is being sent) will >> > be replaced by a new openstack-discuss at lists.openstack.org mailing >> > list. 
>> >> Since last week there was some discussion of including the openstack-tc >> mailing list among these lists to eliminate confusion caused by the fact >> that the list is not configured to accept messages from all subscribers >> (it's meant to be used for us to make sure TC members see meeting >> announcements). >> >> I'm inclined to include it and either use a direct mailing or the >> [tc] tag on the new discuss list to reach TC members, but I would >> like to hear feedback from TC members and other interested parties >> before calling that decision made. Please let me know what you think. >> > > +2 , easier to manage, easier to reach out. > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Thu Sep 20 22:30:32 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 20 Sep 2018 17:30:32 -0500 Subject: [openstack-dev] Capturing Feedback/Input Message-ID: Hey everyone, During the TC meeting at the PTG we discussed the ideal way to capture user-centric feedback, particularly from our various groups like SIGs, WGs, etc. Options that were mentioned ranged from a wiki page to a standalone solution like discourse. While there is no perfect solution, it was determined that Storyboard could facilitate this. It would play out where there is a project group openstack-uc? and each of the SIGs, WGs, etc would have a project under this group; if I am wrong someone else in the room correct me.
The entire point is a first step (maybe final) in centralizing user-centric feedback that does not require any extra overhead be it cost, time, or otherwise. Just kicking off a discussion so others have a chance to chime in before anyone pulls the plug or pushes the button on anything and we settle as a community on what makes sense. -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Thu Sep 20 22:40:56 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 21 Sep 2018 06:40:56 +0800 Subject: [openstack-dev] [Openstack-sigs] Capturing Feedback/Input In-Reply-To: References: Message-ID: big +1, really look forward to the storyboard setup On Fri, Sep 21, 2018 at 6:31 AM Melvin Hillsman wrote: > Hey everyone, > > During the TC meeting at the PTG we discussed the ideal way to capture > user-centric feedback; particular from our various groups like SIGs, WGs, > etc. > > Options that were mentioned ranged from a wiki page to a standalone > solution like discourse. > > While there is no perfect solution it was determined that Storyboard could > facilitate this. It would play out where there is a project group > openstack-uc? and each of the SIGs, WGs, etc would have a project under > this group; if I am wrong someone else in the room correct me. > > The entire point is a first step (maybe final) in centralizing > user-centric feedback that does not require any extra overhead be it cost, > time, or otherwise. Just kicking off a discussion so others have a chance > to chime in before anyone pulls the plug or pushes the button on anything > and we settle as a community on what makes sense. 
> > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From davanum at gmail.com Thu Sep 20 22:45:57 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Thu, 20 Sep 2018 18:45:57 -0400 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> Message-ID: +1 from me. On Thu, Sep 20, 2018 at 5:59 PM Melvin Hillsman wrote: > I agree all lists should be merged as discussed otherwise why not just > leave all things as they are :P > > On Thu, Sep 20, 2018 at 4:49 PM Emilien Macchi wrote: > >> >> >> On Thu, Sep 20, 2018 at 5:47 PM Doug Hellmann >> wrote: >> >>> Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000: >>> > tl;dr: The openstack, openstack-dev, openstack-sigs and >>> > openstack-operators mailing lists (to which this is being sent) will >>> > be replaced by a new openstack-discuss at lists.openstack.org mailing >>> > list. 
>>> >>> Since last week there was some discussion of including the openstack-tc >>> mailing list among these lists to eliminate confusion caused by the fact >>> that the list is not configured to accept messages from all subscribers >>> (it's meant to be used for us to make sure TC members see meeting >>> announcements). >>> >>> I'm inclined to include it and either use a direct mailing or the >>> [tc] tag on the new discuss list to reach TC members, but I would >>> like to hear feedback from TC members and other interested parties >>> before calling that decision made. Please let me know what you think. >>> >> >> +2 , easier to manage, easier to reach out. >> -- >> Emilien Macchi >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Sep 20 22:53:23 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 20 Sep 2018 17:53:23 -0500 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? 
In-Reply-To: References: Message-ID: <0cac451b-6519-f0de-acb7-0703560b1f4d@gmail.com> On 9/20/2018 12:08 PM, Előd Illés wrote: > Hi Matt, > > About 1.: I think it is a good idea to cut a final release (especially > as some vendor/operator would be glad even if there would be some > release in Extended Maintenance, too, which most probably won't > happen...) -- saying that without knowing how much of a burden it would > be for projects to do this final release... > After that it sounds reasonable to tag the branches EM (as it is written > in the mentioned resolution). > > Do you have any plan about how to coordinate the 'final releases' and do > the EM-tagging? > > Thanks for raising these questions! > > Cheers, > > Előd For anyone following along who cares about this (hopefully PTLs), Előd, Doug, Sean and I formulated a plan in IRC today [1]. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-stable/%23openstack-stable.2018-09-20.log.html#t2018-09-20T17:10:56 -- Thanks, Matt From fungi at yuggoth.org Thu Sep 20 22:54:11 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 20 Sep 2018 22:54:11 +0000 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...) In-Reply-To: <1537479809-sup-898@lrrr.local> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> Message-ID: <20180920225411.azzud6ria7yc23i3@yuggoth.org> On 2018-09-20 15:46:43 -0600 (-0600), Doug Hellmann wrote: [...] > Since last week there was some discussion of including the openstack-tc > mailing list among these lists to eliminate confusion caused by the fact > that the list is not configured to accept messages from all subscribers > (it's meant to be used for us to make sure TC members see meeting > announcements). [...] I think it makes sense. The Interop WG also indicated they'd like to do the same with theirs.
In cases like these where the lists in question have much lower volume anyway, I don't think any special handling is needed. Basically any time after November 19 just send a post to the old list saying that subsequent messages need to go to openstack-discuss instead, and then immediately set it to no longer accept new messages. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From miguel at mlavalle.com Thu Sep 20 23:11:35 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 20 Sep 2018 18:11:35 -0500 Subject: [openstack-dev] Neutron drivers meeting on September 21st Message-ID: Hi, Since we just came back from the PTG in Denver and we reviewed / discussed many RFEs, let's skip the Drivers meeting tomorrow, Friday 21st. We will resume the meeting on the 28th Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Sep 21 00:19:08 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 20 Sep 2018 19:19:08 -0500 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...) In-Reply-To: <1537479809-sup-898@lrrr.local> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> Message-ID: <20180921001908.GA16789@sm-workstation> On Thu, Sep 20, 2018 at 03:46:43PM -0600, Doug Hellmann wrote: > Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000: > > tl;dr: The openstack, openstack-dev, openstack-sigs and > > openstack-operators mailing lists (to which this is being sent) will > > be replaced by a new openstack-discuss at lists.openstack.org mailing > > list. 
> > Since last week there was some discussion of including the openstack-tc > mailing list among these lists to eliminate confusion caused by the fact > that the list is not configured to accept messages from all subscribers > (it's meant to be used for us to make sure TC members see meeting > announcements). > > I'm inclined to include it and either use a direct mailing or the > [tc] tag on the new discuss list to reach TC members, but I would > like to hear feedback from TC members and other interested parties > before calling that decision made. Please let me know what you think. > > Doug > This makes sense to me. I would rather have any discussions where everyone is likely to see them than to continue with the current separation. From sean.mcginnis at gmx.com Fri Sep 21 00:21:53 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 20 Sep 2018 19:21:53 -0500 Subject: [openstack-dev] Capturing Feedback/Input In-Reply-To: References: Message-ID: <20180921002152.GB16789@sm-workstation> On Thu, Sep 20, 2018 at 05:30:32PM -0500, Melvin Hillsman wrote: > Hey everyone, > > During the TC meeting at the PTG we discussed the ideal way to capture > user-centric feedback; particular from our various groups like SIGs, WGs, > etc. > > Options that were mentioned ranged from a wiki page to a standalone > solution like discourse. > > While there is no perfect solution it was determined that Storyboard could > facilitate this. It would play out where there is a project group > openstack-uc? and each of the SIGs, WGs, etc would have a project under > this group; if I am wrong someone else in the room correct me. > > The entire point is a first step (maybe final) in centralizing user-centric > feedback that does not require any extra overhead be it cost, time, or > otherwise. Just kicking off a discussion so others have a chance to chime > in before anyone pulls the plug or pushes the button on anything and we > settle as a community on what makes sense. 
> > -- > Kind regards, > > Melvin Hillsman I think Storyboard would be a good place to manage SIG/WG feedback. It will take some time before the majority of projects have moved over from Launchpad, but once they do, this will make it much easier to track SIG initiatives all the way through to code implementation. From rico.lin.guanyu at gmail.com Fri Sep 21 01:20:54 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 21 Sep 2018 09:20:54 +0800 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> Message-ID: On Fri, Sep 21, 2018 at 5:59 AM Melvin Hillsman wrote: > > I agree all lists should be merged as discussed otherwise why not just leave all things as they are :P Yeah, if we merge all the lists, it's much easier for everyone to know exactly where to go to start discussions. I doubt all experienced developers and ops subscribe to every list that is relevant to them. From this point, combined with global outreach, we can easily give newcomers the correct information. I'm tired of putting `[openstack-dev][Openstack-operators][Openstack-sigs]` in front of my mail subject; that's just pure madness -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Sep 21 01:21:42 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 20 Sep 2018 20:21:42 -0500 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...)
In-Reply-To: <20180921001908.GA16789@sm-workstation> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> <20180921001908.GA16789@sm-workstation> Message-ID: On Thu, Sep 20, 2018 at 7:19 PM Sean McGinnis wrote: > On Thu, Sep 20, 2018 at 03:46:43PM -0600, Doug Hellmann wrote: > > Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000: > > > tl;dr: The openstack, openstack-dev, openstack-sigs and > > > openstack-operators mailing lists (to which this is being sent) will > > > be replaced by a new openstack-discuss at lists.openstack.org mailing > > > list. > > > > Since last week there was some discussion of including the openstack-tc > > mailing list among these lists to eliminate confusion caused by the fact > > that the list is not configured to accept messages from all subscribers > > (it's meant to be used for us to make sure TC members see meeting > > announcements). > > > > I'm inclined to include it and either use a direct mailing or the > > [tc] tag on the new discuss list to reach TC members, but I would > > like to hear feedback from TC members and other interested parties > > before calling that decision made. Please let me know what you think. > > > > Doug > > > > This makes sense to me. I would rather have any discussions where everyone > is > likely to see them than to continue with the current separation. > +1 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iwamoto at valinux.co.jp Fri Sep 21 01:36:56 2018 From: iwamoto at valinux.co.jp (IWAMOTO Toshihiro) Date: Fri, 21 Sep 2018 10:36:56 +0900 Subject: [openstack-dev] [neutron] heads up to long time ovs users... Message-ID: <20180921013656.31737B3BA8@mail.valinux.co.jp> The neutron team is finally removing the ovs-ofctl option. https://review.openstack.org/#/c/599496/ The ovs-ofctl of_interface option hasn't been the default since Newton and was deprecated in Pike. So, if you are a long-time ovs-agent user upgrading to an upcoming release, you must switch from the ovs-ofctl implementation to the native implementation and will be affected by the following issue. https://bugs.launchpad.net/neutron/+bug/1793354 The loss of communication mentioned in this bug report would be a few seconds to a few minutes depending on the number of network interfaces. It happens when an ovs-agent is restarted with the new of_interface (so only once during the upgrade) and persists until the network interfaces are set up. Please speak up if you cannot tolerate this during upgrades. IIUC, this bug is unfixable and I'd like to move forward, as maintaining two of_interface implementations is a burden for the neutron team. -- IWAMOTO Toshihiro From zhipengh512 at gmail.com Fri Sep 21 02:04:35 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 21 Sep 2018 10:04:35 +0800 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...)
In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> <20180921001908.GA16789@sm-workstation> Message-ID: +1 and thanks for the effort to make this happen On Fri, Sep 21, 2018, 9:22 AM Lance Bragstad wrote: > > > On Thu, Sep 20, 2018 at 7:19 PM Sean McGinnis > wrote: > >> On Thu, Sep 20, 2018 at 03:46:43PM -0600, Doug Hellmann wrote: >> > Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000: >> > > tl;dr: The openstack, openstack-dev, openstack-sigs and >> > > openstack-operators mailing lists (to which this is being sent) will >> > > be replaced by a new openstack-discuss at lists.openstack.org mailing >> > > list. >> > >> > Since last week there was some discussion of including the openstack-tc >> > mailing list among these lists to eliminate confusion caused by the fact >> > that the list is not configured to accept messages from all subscribers >> > (it's meant to be used for us to make sure TC members see meeting >> > announcements). >> > >> > I'm inclined to include it and either use a direct mailing or the >> > [tc] tag on the new discuss list to reach TC members, but I would >> > like to hear feedback from TC members and other interested parties >> > before calling that decision made. Please let me know what you think. >> > >> > Doug >> > >> >> This makes sense to me. I would rather have any discussions where >> everyone is >> likely to see them than to continue with the current separation. 
>> > > +1 > > >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Fri Sep 21 02:28:43 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 21 Sep 2018 11:28:43 +0900 Subject: [openstack-dev] [StoryBoard] PTG Summary In-Reply-To: References: Message-ID: Hi Kendall, I couldn't attend the PTG but those are exactly what I love to have in Storyboard, especially "Project Group Descriptions" and "Story Attachments". Thanks for the effort! *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Fri, Sep 21, 2018 at 6:15 AM Kendall Nelson wrote: > Hello Lovers of Task Tracking! > > So! We talked about a lot of things, and I went to a lot of rooms to talk > about StoryBoard related things and it was already a week ago so bear with > me. > > We had a lot of good discussions as we were able to include SotK in > discussions via videocalling. We also had the privilege of our outreachy > intern to come all the way from Cairo to Denver to join us :) > > Onto the summaries! > > Story Attachments > > ============== > > This topic has started coming up with increasing regularity. Currently, > StoryBoard doesn’t support attachments, but it’s a feature that several > projects claim to be blocking their migration. 
The current work around is > either to trim down logs and paste the relevant section, or to host the > file elsewhere and link to its location. After consulting with the > infrastructure team, we concluded that currently, there is no donated > storage. The next step is for me to draft a spec detailing our requirements > and implementation details and then to include infra on the review to help > them have something concrete to go to vendors with. For notes on the > proposed method see the etherpad[1]. > > One other thing discussed during this topic was how we could maybe migrate > the current attachments. This isn’t supported by the migration script at > this point, but it’s something we could write a separate script for. It > should be separate because it would be a painfully slow process and we > wouldn’t want to slow down the migration script more than it already is by > the Launchpad API. The attachments script would be run after the initial > migration; that being said, everything still persists in Launchpad so > things can still be referenced there. > > Handling Duplicate Stories > > ==================== > > This is also an ongoing topic for discussion. Duplicate stories if not > handled properly could dilute the database as we get more projects migrated > over. The plan we settled on is to add a ‘Mark as Duplicate’ button to the > webclient and corresponding functions to the API. The user would be > prompted for a link to the master story. The master story would get a new > timeline event that would have the link to the duplicate and the duplicate > story would have all tasks auto marked as invalid (aside from those marked > as merged) so that the story then shows as inactive. The duplicate story > could also get a timeline event that explains what happened and links to > the master story. I’ve yet to create a story for all of this, but it’s on > my todo list. 
> > Handling Thousands of Comments Per Story > > ================================== > > There’s this special flower story[2] that has literally thousands of > comments on it because of all of the gerrit comments being added to the > timeline for all the patches for all the tasks. Rendering of the timeline > portion of the webpage in the webclient is virtually impossible. It will > load the tasks and then hang forever. The discussion around this boiled > down to this: other task trackers also can’t handle this and there is a > better way to divvy up the story into several stories and contain them in a > worklist for future, similar work. For now, users can select what they want > to load in their timeline views for stories, so by unmarking all of the > timeline events in their preferences, the story will load completely sans > timeline details. Another solution we discussed to help alleviate the > timeline load on stories with lots of tasks is to have a task field that > links to the review, rather than a comment from gerrit every time a new > patch gets pushed. Essentially we want to focus on cleaning up the timeline > rather than just going straight to a pagination type of solution. It was > also concluded that we want to add another user preference for page sizes > of 1000. Tasks have not been created in the story related to this issue > yet[3], but its on my todo list. > > Project Group Descriptions > > ===================== > > There was a request to have project group descriptions, but currently > there is nothing in the API handling this. Discussion concluded with > agreement that this shouldn’t be too difficult. All that needs to happen is > a few additions to the API and the connection to managing group definitions > in project-config. I still need to make a story for this. > > Translating storyboard-webclient > > ========================= > > There was an infrastructure mailing list thread a little while back that > kicked off discussion on this topic. 
It was received as an interesting idea > and could help with the adoption of StoryBoard outside of OpenStack. The > biggest concern was communicating to users that are seeing the webclient > rendered in some other language that they still need to create > tasks/stories/worklists/boards in English or whatever the default language > is for the organization that is hosting StoryBoard. This could be a banner > when someone logs in, or something on user’s dashboards. One of the things > that needs to happen first is to find libraries for javascript and angular > for signaling what strings need to be translated. We didn’t really outline > next steps past that as it’s not super high priority, but it’s definitely > an effort we would support if someone wanted to start driving it forward. > > Easier Rollback for Webclient Continuous Deployment > > ========================================= > > With the puppet-storyboard module we deploy from tarballs instead of from > git right now, and we don't preserve earlier tarballs which makes it > difficult to rollback changes when we find issues. There wasn’t a ton of > discussion besides, yes we need to figure this out. Pre-zuulv3 we uploaded > tarballs with the git sha, if we apply that to > publish-openstack-javascript-content, that might help the situation. > > Managing Project Coresec Groups > > ========================== > > The vast majority of work on private stories has been implemented. Stories > can be marked as private and users can subscribe other users to those > private stories so that only those people can see them. The only > convenience that is currently lacking is adding groups of users (manually > or automatically if in a template story). Groups of users are currently > only managed by StoryBoard admins. We would like to make this managed in a > repository or by proxying gerrit group management. 
This shouldn’t be too > complicated a change, it would only require some sort of flag being set for > a group definition and then some database migration to sync those groups > into the StoryBoard database. If you have opinions on this topic, it’s not > all set in stone and we would love to hear your thoughts! > > > Searching > > ======== > > It’s become apparent that while the search and type ahead features of > StoryBoard work better than most users think at first glance, it’s an issue > that users struggle with searching as much as they do. We talked about > possible solutions for this aside from writing documentation to cover > searching in the webclient. The solution we talked about most was that it > might be easier for our users if we used the gerrit query language as that > is what the majority of our users are already familiar with. The next step > here is to write a spec for using the gerrit query language- or some other > language if users disagree about using the gerrit language. > > > Show all OpenStack Repos in StoryBoard? > > ================================ > > Are we getting to the point where it would be helpful for the users of > StoryBoard to be able to add tasks to stories for all the repos not already > migrated to StoryBoard? This would be incredibly helpful for things like > release goal tracking where many repos that haven’t been migrated had tasks > that were assigned to governance instead of the actual repo so as to be > able to track everything in a single story. This is something we will want > to take up with the TC during a set of office hours in the next week or so. > > > Summary & Continuing Conversations > > ============================= > > My brain is mush. Hopefully I covered the majority of the important topics > and did them justice! Anyone that was there, please feel free to correct > me. 
Anyone that wasn’t there that is interested in getting involved with > any of this, please join us in #storyboard on IRC or email us with the > [Storyboard] tag to the dev or infra mailing lists. We also have weekly > meetings[4] on Wednesdays at 19:00 UTC, please join us! > > > I've got a lot of stories to make/update and tasks to add... > > > Thanks! > > -Kendall Nelson (diablo_rojo) > > [1] https://etherpad.openstack.org/p/sb-stein-ptg-planning > > [2] https://storyboard.openstack.org/#!/story/2002586 > > [3] https://storyboard.openstack.org/#!/story/2003525 > > [4] http://eavesdrop.openstack.org/#StoryBoard_Meeting > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Fri Sep 21 03:49:04 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Thu, 20 Sep 2018 20:49:04 -0700 Subject: [openstack-dev] [congress] proposed new meeting time Message-ID: Hi all, Following discussions in IRC meetings, here is a proposed new meeting time for the Congress project: on even weeks, Friday UTC 4AM (from the current UTC 2:30AM). The new time would make it easier for India while still being good for Asia Pacific. The time continues to be bad for Europe and the Eastern Americas. We can add another meeting time in the off week if there is interest. Please respond if you have any additional comments! Eric Kao From dangtrinhnt at gmail.com Fri Sep 21 04:37:47 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 21 Sep 2018 13:37:47 +0900 Subject: [openstack-dev] [Searchlight] vPTG Summary Message-ID: Hi team, We had a great vPTG yesterday. Here is the summary [1]. Thanks to Kevin_Zheng and thuydang for joining us.
Best, [1] https://www.dangtrinh.com/2018/09/searchlight-vptg-summary.html *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Sep 21 05:04:37 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 21 Sep 2018 14:04:37 +0900 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...) In-Reply-To: <1537479809-sup-898@lrrr.local> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> Message-ID: <165fa83f855.e2df243324249.1484308163727916304@ghanshyammann.com> ---- On Fri, 21 Sep 2018 06:46:43 +0900 Doug Hellmann wrote ---- > Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000: > > tl;dr: The openstack, openstack-dev, openstack-sigs and > > openstack-operators mailing lists (to which this is being sent) will > > be replaced by a new openstack-discuss at lists.openstack.org mailing > > list. > > Since last week there was some discussion of including the openstack-tc > mailing list among these lists to eliminate confusion caused by the fact > that the list is not configured to accept messages from all subscribers > (it's meant to be used for us to make sure TC members see meeting > announcements). > > I'm inclined to include it and either use a direct mailing or the > [tc] tag on the new discuss list to reach TC members, but I would > like to hear feedback from TC members and other interested parties > before calling that decision made. Please let me know what you think. +1 on including the openstack-tc list as well. That will help get more attention on TC discussions from other groups too.
-gmann > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From samuel at cassi.ba Fri Sep 21 05:14:41 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Thu, 20 Sep 2018 22:14:41 -0700 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...) In-Reply-To: <1537479809-sup-898@lrrr.local> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> Message-ID: On Thu, Sep 20, 2018 at 2:48 PM Doug Hellmann wrote: > > Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000: > > tl;dr: The openstack, openstack-dev, openstack-sigs and > > openstack-operators mailing lists (to which this is being sent) will > > be replaced by a new openstack-discuss at lists.openstack.org mailing > > list. > > Since last week there was some discussion of including the openstack-tc > mailing list among these lists to eliminate confusion caused by the fact > that the list is not configured to accept messages from all subscribers > (it's meant to be used for us to make sure TC members see meeting > announcements). > > I'm inclined to include it and either use a direct mailing or the > [tc] tag on the new discuss list to reach TC members, but I would > like to hear feedback from TC members and other interested parties > before calling that decision made. Please let me know what you think. > > Doug > +1. Including the TC list as a tag makes sense to me, and it fits with my tangent about intent in online communities.
From jazeltq at gmail.com Fri Sep 21 06:14:12 2018 From: jazeltq at gmail.com (Jaze Lee) Date: Fri, 21 Sep 2018 14:14:12 +0800 Subject: [openstack-dev] [release-schedule] about release Series Message-ID: Hello, Is the content at https://releases.openstack.org/ the latest? There is a question here: why is the Queens cycle longer than the Rocky cycle? The Rocky cycle's life is too short, even less than six months. Maybe the "estimated 2019-02-23" is a slip of the pen? Should it be "2020-02-23"? Can someone tell me the reason for this? Thanks a lot. -- 谦谦君子 From mdulko at redhat.com Fri Sep 21 06:54:25 2018 From: mdulko at redhat.com (Michał Dulko) Date: Fri, 21 Sep 2018 08:54:25 +0200 Subject: [openstack-dev] Nominating Luis Tomás Bolívar for kuryr-kubernetes core In-Reply-To: <6a804d75-90f9-c561-5891-7b89f8fa9cb2@redhat.com> References: <6a804d75-90f9-c561-5891-7b89f8fa9cb2@redhat.com> Message-ID: <38ce800281ae6544057e2f4e1cc273291f783da6.camel@redhat.com> On Thu, 2018-09-20 at 18:33 +0200, Daniel Mellado wrote: > Hi All, > > I'd like to nominate Luis Tomás for Kuryr-Kubernetes core. > > He has been contributing to the project development with both features > and quality reviews at core reviewer level, as well as being the stable > branch liaison, keeping an eye on every needed backport and bug, and > fighting and debugging lbaas issues. > > Please follow up with a +1/-1 to express your support, even if he makes > the worst jokes ever! Looks like Luis is doing most of the review work recently [1], so it's definitely a confident +1 from me. [1] http://stackalytics.com/report/contribution/kuryr-group/90 > Thanks!
> > Daniel From rico.lin.guanyu at gmail.com Fri Sep 21 07:01:39 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 21 Sep 2018 15:01:39 +0800 Subject: [openstack-dev] [Openstack-sigs] [First Contact] SIG PTG Summary In-Reply-To: References: Message-ID: On Thu, Sep 20, 2018 at 1:23 PM Kendall Nelson wrote: Organization Guide > > =============== > > Back in Sydney, we started discussing creating a guide for organizations > to educate them on what their contributors need to be effective and > successful members of the community. There is a patch out for review right > now[7] that we want to get merged as soon as possible so that we can > publicize it in Berlin and start introducing it when new companies join the > foundation. It was concluded that we needed to add more rationalizations to > the requirements and we delegated those out to ricolin, jungleboyj, and > spotz to help mattoliverau with content. As soon as this patch gets merged, > I volunteered to work to get it onto the soonest board meeting possible. > Dear all, As an action, I just posted some suggested changes for the `Technical` section (with comments) in [7]; please take a look. > > [7] https://review.openstack.org/#/c/578676 > -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Sep 21 07:10:15 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 21 Sep 2018 16:10:15 +0900 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: Message-ID: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt wrote ---- > tl;dr: +1 consistent names > I would make the names mirror the API...
because the Operator setting them knows the API, not the code. Ignore the crazy names in Nova, I certainly hate them.

Big +1 on consistent naming, which will help operators as well as developers to maintain those.

> > Lance Bragstad wrote:
> > I'm curious if anyone has context on the "os-" part of the format?
>
> My memory of the Nova policy mess...
> * Nova's policy rules traditionally followed the patterns of the code
> ** Yes, horrible, but it happened.
> * The code used to have the OpenStack API and the EC2 API, hence the "os"
> * API used to expand with extensions, so the policy name is often based on extensions
> ** note most of the extension code has now gone, including lots of related policies
> * Policy in code was focused on getting us to a place where we could rename policy
> ** Whoop whoop by the way, it feels like we are really close to something sensible now!
>
> Lance Bragstad wrote:
> Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention?
>
> I could go either way as I think about "list servers" in the API. But my preference is for the URL stub and POST, GET, etc.
>
> On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote:
> If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple service types (e.g. "compute" or "loadbalancer").
>
> +1
> The API is known as "compute" in api-ref, so the policy should be for "compute", etc.

Agree on mapping the policy name with api-ref as much as possible. Other than the policy name having 'os-', we have 'os-' in resource names also in the nova API URLs, like /os-agents, /os-aggregates, etc. (almost every resource except servers and flavors). As we cannot get rid of those from the API URLs, do we need to keep the same in policy naming too? Or we can have a policy name like compute:agents:create/post, but that mismatches api-ref, where the agents resource URL is os-agents.
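[Editor's note: to make the trade-off above concrete, here is a minimal, hypothetical sketch of the convention being discussed. The helper and all names below are illustrative only, not an agreed OpenStack standard or any project's actual code.]

```python
# Hypothetical sketch of a <service-type>:<resource>:<action> naming
# convention. Only the method/collection pairs below are mapped; anything
# else would raise a KeyError in this toy version.
VERB_TO_ACTION = {
    ("POST", True): "create",
    ("GET", True): "list",
    ("GET", False): "show",
    ("PUT", False): "update",
    ("PATCH", False): "update",
    ("DELETE", False): "delete",
}


def policy_name(service_type, url_stub, method, collection=False):
    """Build a policy name such as 'compute:agents:create'.

    The 'os-' prefix of URL stubs like /os-agents is dropped so the
    policy name tracks the api-ref resource name, even though the
    prefix cannot be removed from the API URL itself.
    """
    resource = url_stub.strip("/")
    if resource.startswith("os-"):
        resource = resource[len("os-"):]
    action = VERB_TO_ACTION[(method.upper(), collection)]
    return "%s:%s:%s" % (service_type, resource.replace("-", "_"), action)


print(policy_name("compute", "/os-agents", "POST", collection=True))
# compute:agents:create
print(policy_name("compute", "/os-aggregates", "GET", collection=True))
# compute:aggregates:list
```

A real deployment would register such names as policy-in-code defaults (with the old names kept as deprecated aliases during a transition period), rather than deriving them at request time.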
Also, we have action APIs (I know from nova, not sure about other services) like POST /servers/{server_id}/action {addSecurityGroup}, and their current policy names are all inconsistent. A few have a policy name including their resource name, like "os_compute_api:os-flavor-access:add_tenant_access"; a few have 'action' in the policy name, like "os_compute_api:os-admin-actions:reset_state"; and a few have the direct action name, like "os_compute_api:os-console-output". Maybe we can make them consistent with :: or any better opinion.

> From: Lance Bragstad
> The topic of having consistent policy names has popped up a few times this week.
>
> I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following:
> * move to new consistent policy name, deprecate existing name
> * hardcode scope check to project, system or user
> ** (user, yes... keypairs, yuck, but its how they work)
> ** deprecate in rule scope checks, which are largely bogus in Nova anyway
> * make read/write/admin distinction
> ** therefore adding the "noop" role, among other things + policy granularity.

It is a good idea to make the policy improvement all together and for all rules as you mentioned. But my worry is how much load it will be on the operator side to migrate all policy rules at the same time? What will be the deprecation period, etc., which I think we can discuss on the proposed spec - https://review.openstack.org/#/c/547850

-gmann

> Thanks, John

__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mrunge at redhat.com Fri Sep 21 07:43:52 2018 From: mrunge at redhat.com (Matthias Runge) Date: Fri, 21 Sep 2018 09:43:52 +0200 Subject: [openstack-dev] [release][collectd-openstack-plugins] Schedule new release?
Message-ID: <81cc0f6a-67e0-386c-374a-bb671f8aa6a9@redhat.com> Hello, it has been some time since collectd-ceilometer-plugin [1] was renamed to collectd-openstack-plugins [2], and some time since there has been a release. What is required to trigger a new release here? Thank you, Matthias [1] https://github.com/openstack/collectd-ceilometer-plugin [2] https://github.com/openstack/collectd-openstack-plugins -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander From thierry at openstack.org Fri Sep 21 09:15:04 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 21 Sep 2018 11:15:04 +0200 Subject: [openstack-dev] [release-schedule] about release Series In-Reply-To: References: Message-ID: Jaze Lee wrote: > Hello, > Is the content at https://releases.openstack.org/ the latest? > There is a question here: why is the Queens cycle longer than the Rocky cycle? > The Rocky cycle's life is too short, even less than six months. Maybe > the "estimated 2019-02-23" is a slip of the pen? Should it be > "2020-02-23"? > Can someone tell me the reason for this? Thanks a lot. It probably is a slip of the pen. Thanks for noticing! I proposed the fix: https://review.openstack.org/604309 -- Thierry Carrez (ttx) From thierry at openstack.org Fri Sep 21 09:23:59 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 21 Sep 2018 11:23:59 +0200 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists!
In-Reply-To: <1537479809-sup-898@lrrr.local> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> Message-ID: Doug Hellmann wrote: > I'm inclined to include it and either use a direct mailing or the > [tc] tag on the new discuss list to reach TC members, but I would > like to hear feedback from TC members and other interested parties > before calling that decision made. Please let me know what you think. +1 -- the separate list has outlived its usefulness. -- Thierry Carrez (ttx) From derekh at redhat.com Fri Sep 21 10:16:09 2018 From: derekh at redhat.com (Derek Higgins) Date: Fri, 21 Sep 2018 11:16:09 +0100 Subject: [openstack-dev] [ironic] status of the zuulv3 job migration Message-ID: Just a quick summary of the status, and looking for some input about the experimental jobs.

15 jobs are now done, with another 2 ready for reviewing. This leaves 6 jobs:

1 x multinode job
I've yet to finish porting this one.

2 x grenade jobs
Last time I looked, grenade jobs couldn't yet be ported to zuulv3 native, but I'll investigate further.

3 x experimental jobs (ironic-dsvm-functional, ironic-tempest-dsvm-parallel, ironic-tempest-dsvm-pxe_ipa-full)
These don't currently pass and it doesn't look like anybody is using them, so I'd like to know if there is anybody out there interested in them; if not, I'll go ahead and remove them.

thanks, Derek. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Fri Sep 21 10:34:30 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 21 Sep 2018 06:34:30 -0400 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists!
In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> Message-ID: In my mind, there is value to having a mailing list for things like important meetings, but it seems more reasonable to just address such items as needed. At the same time, I agree with ttx, the separate list is no longer useful. +1 On Fri, Sep 21, 2018 at 5:24 AM Thierry Carrez wrote: > > Doug Hellmann wrote: > > I'm inclined to include it and either use a direct mailing or the > > [tc] tag on the new discuss list to reach TC members, but I would > > like to hear feedback from TC members and other interested parties > > before calling that decision made. Please let me know what you think. > > +1 -- the separate list has outlived its usefulness. > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Fri Sep 21 10:53:08 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 21 Sep 2018 05:53:08 -0500 Subject: [openstack-dev] [release][collectd-openstack-plugins] Schedule new release? In-Reply-To: <81cc0f6a-67e0-386c-374a-bb671f8aa6a9@redhat.com> References: <81cc0f6a-67e0-386c-374a-bb671f8aa6a9@redhat.com> Message-ID: <20180921105307.GA2744@sm-workstation> On Fri, Sep 21, 2018 at 09:43:52AM +0200, Matthias Runge wrote: > Hello, > > it has been some time, since collectd-ceilometer-plugin[1] > has been renamed to collectd-openstack-plugins[2] and since there has been a > release. > > What is required to trigger a new release here? 
> > Thank you, > Matthias > > > [1] https://github.com/openstack/collectd-ceilometer-plugin > [2] https://github.com/openstack/collectd-openstack-plugins > -- > Matthias Runge > Hi Matthias, collectd-openstack-plugins does not appear to be an official repo under governance [1]. For these types of projects to do a release, the team would need to push a tag to the repo. That will trigger some post jobs to run that will create and publish tarballs. Some basic information (though slightly different context) can be found here [2]. [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml [2] https://docs.openstack.org/infra/manual/creators.html#prepare-an-initial-release -- Sean McGinnis (smcginnis) From sombrafam at gmail.com Fri Sep 21 11:09:21 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Fri, 21 Sep 2018 08:09:21 -0300 Subject: [openstack-dev] [ptg][cinder] Stein PTG Summary Page Ready ... In-Reply-To: <20180919095249.mfutrq74ynkdwgvh@localhost> References: <78636766-8656-5110-2526-3cb5a361e06c@gmail.com> <20180919095249.mfutrq74ynkdwgvh@localhost> Message-ID: Thanks Jay! Em qua, 19 de set de 2018 às 06:53, Gorka Eguileor escreveu: > On 18/09, Jay S Bryant wrote: > > Team, > > > > I have put together the following page with a summary of all our > discussions > > at the PTG: https://wiki.openstack.org/wiki/CinderSteinPTGSummary > > > > Please review the contents and let me know if anything needs to be > changed. > > > > Jay > > > > > > Hi Jay, > > Thank you for the great summary, it looks great. > > After reading it, I can't think of anything that's missing. > > Cheers, > Gorka. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Sep 21 11:24:58 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 21 Sep 2018 11:24:58 +0000 Subject: [openstack-dev] [release][collectd-openstack-plugins] Schedule new release? In-Reply-To: <20180921105307.GA2744@sm-workstation> References: <81cc0f6a-67e0-386c-374a-bb671f8aa6a9@redhat.com> <20180921105307.GA2744@sm-workstation> Message-ID: <20180921112458.canplx2fybfrfj3u@yuggoth.org> On 2018-09-21 05:53:08 -0500 (-0500), Sean McGinnis wrote: [...] > collectd-openstack-plugins does not appear to be an official repo under > governance [1]. For these types of projects to do a release, the team would > need to push a tag to the repo. That will trigger some post jobs to run that > will create and publish tarballs. Some basic information (though slightly > different context) can be found here [2]. > > [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml > [2] https://docs.openstack.org/infra/manual/creators.html#prepare-an-initial-release Perhaps slightly more aligned with the context of the question is this document: https://docs.openstack.org/infra/manual/drivers.html#tagging-a-release -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From bence.romsics at gmail.com Fri Sep 21 12:14:32 2018 From: bence.romsics at gmail.com (Bence Romsics) Date: Fri, 21 Sep 2018 14:14:32 +0200 Subject: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG In-Reply-To: <1536806866.7148.0@smtp.office365.com> References: <1535619300.3600.5@smtp.office365.com> <4bb21c51-0092-70f3-a535-8fa59adae7ae@gmail.com> <1535704165.17206.0@smtp.office365.com> <1536806866.7148.0@smtp.office365.com> Message-ID: Hi, At the demo it turned out that people were interested in a written version of the demo, so here you can find a blog post about the current state of the guaranteed minimum bandwidth feature: https://rubasov.github.io/2018/09/21/openstack-qos-min-bw-demo.html Cheers, Bence On Thu, Sep 13, 2018 at 4:48 AM Balázs Gibizer wrote: > > Hi, > > It seems that the Nova room (Ballroom A) does not have any projection > possibilities. In the other hand the Neutron room ( > Vail Meeting Room, Atrium Level) does have a projector. So I suggest to > move the demo to the Neutron room. > > Cheers, > gibi > > On Fri, Aug 31, 2018 at 2:29 AM, Balázs Gibizer > wrote: > > > > > > On Thu, Aug 30, 2018 at 8:13 PM, melanie witt > > wrote: > >> On Thu, 30 Aug 2018 12:43:06 -0500, Miguel Lavalle wrote: > >>> Gibi, Bence, > >>> > >>> In fact, I added the demo explicitly to the Neutron PTG agenda from > >>> 1:30 to 2, to give it visiblilty > >> > >> I'm interested in seeing the demo too. Will the demo be shown at the > >> Neutron room or the Nova room? Historically, lunch has ended at > >> 1:30, so this will be during the same time as the Neutron/Nova > >> cross project time. Should we just co-locate together for the demo > >> and the session? I expect anyone watching the demo will want to > >> participate in the Neutron/Nova session as well. Either room is > >> fine by me. 
> >> > > > > I assume that the nova - neturon cross project session will be in the > > nova room, so I propose to have the demo there as well to avoid > > unnecessarily moving people around. For us it is totally OK to start > > the demo at 1:30. > > > > Cheers, > > gibi > > > > > >> -melanie > >> > >>> On Thu, Aug 30, 2018 at 3:55 AM, Balázs Gibizer > >>> >>> > wrote: > >>> > >>> Hi, > >>> > >>> Based on the Nova PTG planning etherpad [1] there is a need to > >>> talk > >>> about the current state of the bandwidth work [2][3]. Bence > >>> (rubasov) has already planned to show a small demo to Neutron > >>> folks > >>> about the current state of the implementation. So Bence and I > >>> are > >>> wondering about bringing that demo close to the nova - neutron > >>> cross > >>> project session. That session is currently planned to happen > >>> Thursday after lunch. So we are think about showing the demo > >>> right > >>> before that session starts. It would start 30 minutes before the > >>> nova - neutron cross project session. > >>> > >>> Are Nova folks also interested in seeing such a demo? > >>> > >>> If you are interested in seeing the demo please drop us a line > >>> or > >>> ping us in IRC so we know who should we wait for. 
> >>> > >>> Cheers, > >>> gibi > >>> > >>> [1] https://etherpad.openstack.org/p/nova-ptg-stein > >>> > >>> [2] > >>> > >>> https://specs.openstack.org/openstack/neutron-specs/specs/rocky/minimum-bandwidth-allocation-placement-api.html > >>> > >>> > >>> [3] > >>> > >>> https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html > >>> > >>> > >>> > >>> > >>> > >>> __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> > >>> > >>> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> > >>> > >>> > >>> > >>> > >>> __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> > >> > >> > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From nguyentrihai93 at gmail.com Fri Sep 21 12:36:30 2018 From: nguyentrihai93 at gmail.com (Nguyễn Trí Hải) Date: Fri, 21 Sep 2018 21:36:30 +0900 Subject: [openstack-dev] [goals][python3][karbor] Please process the python3-first patches Message-ID: Hi Karbor team and Karbor PTL, As part of the "Run under Python 3 by default" community goal [1] for OpenStack in the Stein cycle, I proposed the patches related to the python3-first goal a very long time ago. However, there has been no activity on those patches. Please pick those patches up and review them. Those patches belong to: - openstack/karbor - openstack/karbor-dashboard - openstack/python-karborclient Here they are: https://review.openstack.org/#/q/project:%255E.*karbor.*+topic:python3-first+status:open [1] https://governance.openstack.org/tc/goals/stein/python3-first.html -- Nguyen Tri Hai / Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiaopengju at cmss.chinamobile.com Fri Sep 21 13:32:58 2018 From: jiaopengju at cmss.chinamobile.com (jiaopengju) Date: Fri, 21 Sep 2018 21:32:58 +0800 Subject: Re: [openstack-dev] [goals][python3][karbor] Please process the python3-first patches Message-ID: Thanks for pushing these patches, we will review and merge them ASAP.
Original message From: Nguyễn Trí Hải nguyentrihai93 at gmail.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev at lists.openstack.org Cc: jiaopengju jiaopengju at cmss.chinamobile.com Sent: Sep 21, 2018 (Fri) 20:36 Subject: [openstack-dev][goals][python3][karbor] Please process the python3-first patches Hi Karbor team and Karbor PTL, As part of the "Run under Python 3 by default" community goal [1] for OpenStack in the Stein cycle, I proposed the patches related to the python3-first goal a very long time ago. However, there has been no activity on those patches. Please pick those patches up and review them. Those patches belong to: - openstack/karbor - openstack/karbor-dashboard - openstack/python-karborclient Here they are: https://review.openstack.org/#/q/project:%255E.*karbor.*+topic:python3-first+status:open [1] https://governance.openstack.org/tc/goals/stein/python3-first.html -- Nguyen Tri Hai / Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Sep 21 13:46:20 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 21 Sep 2018 07:46:20 -0600 Subject: [openstack-dev] [all][tc] please vote Message-ID: <1537532578-sup-1996@lrrr.local> A few hours ago Emmet reported that we have around 20% participation in the Technical Committee election so far. I think we can do better. Our community is founded on the idea that the contributors should decide how the project is run. PTL and TC elections are the most explicit example of this principle in action. The people elected to the TC this term will be representing your interests to the Board as the Foundation continues to evolve and expand its scope with new focus areas and projects. They also will continue to work on building and sustaining the community, choosing and organizing series goals, and making decisions that have long-term effects on the project.
Please take a few minutes to look at Kendall's instructions[1], find your ballot email, and think about which candidates you want to have on the committee. Then cast your vote today. Doug [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134820.html From jimmy at tipit.net Fri Sep 21 13:59:28 2018 From: jimmy at tipit.net (Jimmy Mcarthur) Date: Fri, 21 Sep 2018 08:59:28 -0500 Subject: [openstack-dev] OpenStack Project Navigator Message-ID: <5BA4F940.7050608@tipit.net> Following up on my (very brief) talk from the PTG, you can now propose changes to the Project Navigator by going to the https://git.openstack.org/cgit/openstack/openstack-map repository. Once your patch is merged, the page should reflect the changes straight away. Cheers, Jimmy From elod.illes at ericsson.com Fri Sep 21 14:08:28 2018 From: elod.illes at ericsson.com (Előd Illés) Date: Fri, 21 Sep 2018 16:08:28 +0200 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <0cac451b-6519-f0de-acb7-0703560b1f4d@gmail.com> References: <0cac451b-6519-f0de-acb7-0703560b1f4d@gmail.com> Message-ID: Hi, Here is an etherpad with the teams that have the stable:follow-policy tag on their repos: https://etherpad.openstack.org/p/ocata-final-release-before-em On the links you can find reports about the open and unreleased changes that could be useful input for the final release before EM. Please have a look at the report (and review the open patches if there are any) so that a release can be made if necessary. Thanks, Előd On 2018-09-21 00:53, Matt Riedemann wrote: > On 9/20/2018 12:08 PM, Elõd Illés wrote: >> Hi Matt, >> >> About 1.: I think it is a good idea to cut a final release >> (especially as some vendor/operator would be glad even if there would >> be some release in Extended Maintenance, too, what most probably >> won't happen...)
>> -- saying that without knowing how much of a burden
>> it would be for projects to do this final release...
>> After that it sounds reasonable to tag the branches EM (as it is
>> written in the mentioned resolution).
>>
>> Do you have any plan about how to coordinate the 'final releases' and
>> do the EM-tagging?
>>
>> Thanks for raising these questions!
>>
>> Cheers,
>>
>> Előd
>
> For anyone following along and that cares about this (hopefully PTLs),
> Előd, Doug, Sean and I formulated a plan in IRC today [1].
>
> [1] http://eavesdrop.openstack.org/irclogs/%23openstack-stable/%23openstack-stable.2018-09-20.log.html#t2018-09-20T17:10:56

From lbragstad at gmail.com Fri Sep 21 14:13:02 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Fri, 21 Sep 2018 09:13:02 -0500
Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names
In-Reply-To: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com>
References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com>
Message-ID: 

On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann wrote:

> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <john at johngarbutt.com> wrote ----
> > tl;dr: +1 consistent names
> > I would make the names mirror the API... because the Operator setting
> > them knows the API, not the code. Ignore the crazy names in Nova; I certainly
> > hate them
>
> Big +1 on consistent naming, which will help operators as well as developers
> to maintain those.
>
> > Lance Bragstad wrote:
> > > I'm curious if anyone has context on the "os-" part of the format?
> > My memory of the Nova policy mess...
> > * Nova's policy rules traditionally followed the patterns of the code
> > ** Yes, horrible, but it happened.
> > * The code used to have the OpenStack API and the EC2 API, hence the "os"
> > * API used to expand with extensions, so the policy name is often based on extensions
> > ** note most of the extension code has now gone, including lots of related policies
> > * Policy in code was focused on getting us to a place where we could rename policy
> > ** Whoop whoop by the way, it feels like we are really close to something sensible now!
>
> > Lance Bragstad wrote:
> > > Thoughts on using create, list, update, and delete as opposed to post,
> > > get, put, patch, and delete in the naming convention?
>
> > I could go either way as I think about "list servers" in the API. But my
> > preference is for the URL stub and POST, GET, etc.
>
> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote:
> > If we consider dropping "os", should we entertain dropping "api",
> > too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple
> > service types (e.g. "compute" or "loadbalancer").
>
> > +1. The API is known as "compute" in api-ref, so the policy should be for
> > "compute", etc.
>
> Agree on mapping the policy name with api-ref as much as possible. Other
> than the policy name having 'os-', we have 'os-' in the resource name also in Nova
> API URLs like /os-agents, /os-aggregates etc. (almost every resource except
> servers, flavors). As we cannot get rid of those from the API URL, do we need to
> keep the same in policy naming too? Or we can have a policy name like
> compute:agents:create/post, but that mismatches api-ref, where the agents
> resource URL is os-agents.

Good question. I think this depends on how the service does policy enforcement. I know we did something like this in keystone, which required policy names and method names to be the same:

    "identity:list_users": "..."
Because the initial implementation of policy enforcement used a decorator like this:

    from keystone import controller

    @controller.protected
    def list_users(self):
        ...

Having the policy name the same as the method name made it easier for the decorator implementation to resolve the policy needed to protect the API because it just looked at the name of the wrapped method. The advantage was that it was easy to implement new APIs because you only needed to add a policy, implement the method, and make sure you decorate the implementation.

While this worked, we are moving away from it entirely. The decorator implementation was ridiculously complicated. Only a handful of keystone developers understood it. With the addition of system-scope, it would have only become more convoluted. It also enables a much more copy-paste pattern (e.g., so long as I wrap my method with this decorator implementation, things should work, right?). Instead, we're calling enforcement within the controller implementation to ensure things are easier to understand. It requires developers to be cognizant of how different token types affect the resources within an API.

That said, coupling the policy name to the method name is no longer a requirement for keystone. Hopefully, that helps explain why we needed them to match.

> Also we have action APIs (I know from Nova, not sure about other services)
> like POST /servers/{server_id}/action {addSecurityGroup}, and their current
> policy names are all inconsistent. A few have a policy name including their
> resource name, like "os_compute_api:os-flavor-access:add_tenant_access", a few
> have 'action' in the policy name, like
> "os_compute_api:os-admin-actions:reset_state", and a few have a direct action
> name, like "os_compute_api:os-console-output"

Since the actions API relies on the request body and uses a single HTTP method, does it make sense to have the HTTP method in the policy name?
It feels redundant, and we might be able to establish a convention that's more meaningful for things like action APIs. It looks like cinder has a similar pattern [0].

[0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action

> Maybe we can make them consistent with :: or any better opinion.

> From: Lance Bragstad
> The topic of having consistent policy names has popped up a few times this week.
>
> I would love to have this nailed down before we go through all the
> policy rules again. In my head I hope in Nova we can go through each policy
> rule and do the following:
>
> * move to new consistent policy name, deprecate existing name
> * hardcode scope check to project, system or user
> ** (user, yes... keypairs, yuck, but it's how they work)
> ** deprecate in-rule scope checks, which are largely bogus in Nova anyway
> * make read/write/admin distinction
> ** therefore adding the "noop" role, among other things
>
> + policy granularity.
>
> It is a good idea to make the policy improvement all together and for all
> rules as you mentioned. But my worry is how much load it will be on the
> operator side to migrate all policy rules at the same time? What will be the
> deprecation period, etc., which I think we can discuss on the proposed spec -
> https://review.openstack.org/#/c/547850

Yeah, that's another valid concern. I know at least one operator has weighed in already. I'm curious if operators have specific input here.

It ultimately depends on whether they override existing policies or not. If a deployment doesn't have any overrides, it should be a relatively simple change for operators to consume.
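To tie the two threads above together — name-derived enforcement versus explicit enforcement, and a "service:resource:action" style convention — here is a minimal toy sketch. None of this is keystone's, nova's, or oslo.policy's real API; every name in it is invented purely for illustration:

```python
# Toy sketch only: not keystone's, nova's, or oslo.policy's real API.
# All policy names, rules, and helpers here are invented for illustration.

POLICIES = {
    # A "service:resource:action" style convention, as discussed above.
    "identity:users:list": "role:reader",
    "identity:users:create": "role:admin",
}


def enforce(policy_name, roles):
    """Toy check: a rule 'role:X' passes when X is among the caller's roles."""
    required = POLICIES[policy_name].split(":", 1)[1]
    if required not in roles:
        raise PermissionError(policy_name)


def protected(method):
    """Old keystone-style coupling: the policy name is derived from the
    wrapped method's name, so the two are forced to match."""
    def wrapper(self, roles, *args, **kwargs):
        enforce("identity:users:" + method.__name__, roles)
        return method(self, roles, *args, **kwargs)
    return wrapper


class UserController:
    @protected
    def list(self, roles):
        # Policy name resolved implicitly from the method name above.
        return ["alice", "bob"]

    def create(self, roles, name):
        # New style: an explicit call inside the controller, so the
        # policy name is free to follow whatever convention is agreed.
        enforce("identity:users:create", roles)
        return name
```

With this shape, `UserController().list(["reader"])` succeeds while a caller holding only an unrelated role raises `PermissionError`, and renaming a policy no longer requires renaming a Python method.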
> -gmann

Thanks,
John
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com Fri Sep 21 14:26:41 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 21 Sep 2018 08:26:41 -0600
Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode?
In-Reply-To: 
References: <0cac451b-6519-f0de-acb7-0703560b1f4d@gmail.com>
Message-ID: <1537539983-sup-3232@lrrr.local>

Excerpts from Előd Illés's message of 2018-09-21 16:08:28 +0200:
> Hi,
>
> Here is an etherpad with the teams that have stable:follow-policy tag on
> their repos:
>
> https://etherpad.openstack.org/p/ocata-final-release-before-em
>
> On the links you can find reports about the open and unreleased changes,
> that could be a useful input for the before-EM/final release.
> Please have a look at the report (and review the open patches if there
> are) so that a release can be made if necessary.
>
> Thanks,
>
> Előd

Thanks for pulling all of this information together!

Doug

From tpb at dyncloud.net Fri Sep 21 14:37:05 2018
From: tpb at dyncloud.net (Tom Barron)
Date: Fri, 21 Sep 2018 10:37:05 -0400
Subject: [openstack-dev] [manila] Stein PTG summary
Message-ID: <20180921143705.pn5c7fj7sum5hae2@barron.net>

We've summarized the manila PTG sessions in this etherpad [1] and I've included its contents below.
Please feel free to supplement/correct as appropriate, or to follow up on this mailing list.

-- Tom Barron (tbarron)

We'll use this etherpad to distill AIs, focus areas, etc. from the Stein PTG.

Source: https://etherpad.openstack.org/p/manila-ptg-planning-denver-2018

Retrospective:
* https://etherpad.openstack.org/p/manila-rocky-retrospective
* Is Dustin willing to be our official bug czar [dustins] Yes! ++
* Should plan regional bug smash days and participate in and publish regional OpenStack/open source events
  * Need someone to lead? (See AI for tbarron)
* Need more/earlier review attention for approved specs - PTL needs to keep attention on review priorities
* Use our wiki rather than misc. etherpads to track work, review focus, liaisons, etc.
  * Add "Bug Czar", bug deputies info on the wiki

Survey Results: https://docs.google.com/spreadsheets/d/1J83vnnuuADVwACeJq1g8snxMRM5VAfTbBD6svVeH4yg/edit#gid=843813633

Work planned for Stein:
* governance goals
  * convert gate jobs to python3
    * Assignee: vkmc
    * need to track progress, including 3rd party jobs, on the wiki
  * upgrade health checker (governance goal)
    * Assignee: ?
* Rocky backlog
  * continue priority for access rules
    * Assignee: zhongjun
    * need to track driver side work on the wiki
  * open to continuing json schema request validation
    * Assignee: no one appears to be working on it currently though
    * gouthamr will reach out to original authors and report status during the next week's weekly meeting
* Manila CSI
  * Assignee: gouthamr, vkmc, tbarron
  * hodgepodge is setting up biweekly meeting to drive convergence of disparate efforts
  * https://etherpad.openstack.org/p/sig-k8s-2018-denver-ptg
* Active-active share service
  * Assignee: gouthamr
  * we were going to wait for cinder action because of downstream tooz back end dependencies
  * but later Cinder decided to move ahead aggressively on this since it's needed for Edge distributed control plane topology, so we may start work in manila on this in this cycle too
* openstack client integration
  * Assignee: gouthamr will drive, distribute work
  * We might be able to get an Outreachy intern to help with this +++
* openstack sdk integration
  * Assignee: amito will drive, distribute work
  * We might be able to get an Outreachy intern to help with this +++
* telemetry extension
  * Assignee: vkmc
  * share usage meter, doc, testing
* manage/unmanage for DHSS=True
  * Assignee: ganso
* create share from snapshot in another pool/backend
  * Assignee: ganso
* replication for DHSS=True
  * Assignee: ganso
* OVS -> OVN
  * Assignee: tbarron/gouthamr
  * 3rd party backends may only work with OVS?
* Manila UI Plugin
  * Assignee: vkmc
  * Features mapping: how outdated are we?
  * Selenium test enablement [dustins: I can help/do this] ++
* uWSGI enablement
  * Assignee: vkmc
  * By default in devstack?
Agreements:
* Hold off on storyboard migration till attachments issue is resolved
* Don't need placement service but (like cinder) can use Jay Pipes DB generation synchronization technique to avoid scheduler races
  * Can publish info to placement if / when that is useful for nova
* Use "low-hanging-fruit" as a bug tag on trivial bugs
* Pro-active backport policy for non-vendor fixes (Also see AIs for gouthamr/ganso)
* Vendor fixes need a +1 or +2 from a vendor reviewer if they are active

Action Items:
* tbarron: follow up on user survey question, how to refresh it, talk with manila community
* tbarron: refurbish the manila wiki as central landing spot for review focus, roles (liaisons, bug czar, etc.) rather than us maintaining miscellaneous etherpads that get lost.
* tbarron: get agreement on mid-cycle meetup (maybe virtual) and America's bug smash +1
* ganso: post proposed stable branch policy for review & talk with gouthamr about it
* gouthamr: post proposed revised review policy for review
* gouthamr: drive question of graduation of Experimental features via weekly manila meeting/dev mail list
* dustins: Send email to the Manila list to have people sign up for being "Bug Deputies"

python3 AIs:
* gouthamr: upgrade first party ubuntu jobs to 18.04
* vkmc: transition CentOS jobs to Fedora28
* ?: lvm job to Fedora28 or ubuntu 18.04 (if ipv6 export issue is fixed)
* ?: track third party jobs
  * pramchan - talk to Mark Shuttleworth to get someone assigned to help
* migrate the service image

Testing AIs:
* gouthamr: write test for protocol selection in scheduler
* ganso: review https://review.openstack.org/#/c/586456 & work with bug czar to get an assignee if needed
* tbarron: add standing meeting topic on scenario test reviews
* tbarron: develop 3rd party CI/support status report card
* ganso: start list of test gaps
* gouthamr/vkmc: review test organization vis a vis cert needs
* tbarron: get discussion/review of https://review.openstack.org/#/c/510542/ - Limiting the
number of shares per share server
* gouthamr: talk to tempest folks to get a solution for managing share as a tenant
* vkmc: work with e0ne from horizon on tempest integration test with the manila-ui plugin
* vkmc/dustins: work with horizon on selenium test framework and add manila integration tests when it is stable

Issues from Cinder to Reflect in Manila:
* Relationship between Share Types and Availability Zones (AZs)
* Mid-Cycle Meetups? Physical or Virtual?
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From colleen at gazlene.net Fri Sep 21 15:36:27 2018
From: colleen at gazlene.net (Colleen Murphy)
Date: Fri, 21 Sep 2018 17:36:27 +0200
Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 17 September 2018
Message-ID: <1537544187.1699981.1516172520.4184EDCE@webmail.messagingengine.com>

# Keystone Team Update - Week of 17 September 2018

## News

### PTG recaps

The PTG was last week! Lance [1] and I [2] posted recaps of the keystone sessions.

[1] https://www.lbragstad.com/blog/openstack-stein-ptg-keystone-summary
[2] http://www.gazlene.net/denver-ptg-2.html

### No-op roles and default policy rules

adriant started a discussion [3][4] about the difficulty of creating limited or no-op roles due to the fact that most OpenStack services have default policy rules of just "", which translates to "any role on any project". This means that if you wanted to give a user access only to, for example, Swift, which uses its own ACL model, you have to craft all of your policy files for every other OpenStack service to not use "", since those rules would allow the Swift-only users access to those other services.
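As a concrete illustration of the empty-rule problem, consider a policy override file along these lines (the rule name here is hypothetical; real policy files differ per service and release):

```yaml
# Hypothetical policy override file, for illustration only.

# The common default: "" matches any role on any project, so even a
# user who should only be able to touch Swift would pass this check.
# "compute:servers:list": ""

# What an operator must write, rule by rule and service by service,
# to fence the API off from users holding only an unrelated role:
"compute:servers:list": "role:member or role:reader"
```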
The default role work that has been ongoing since last cycle and that will eventually turn into a cross-project community effort will help to alleviate this hardship for operators by making default policies use explicit roles like "reader" and "member", but this will require a long transition period.

[3] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134886.html
[4] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-09-19.log.html#t2018-09-19T21:45:30

### Consistent policy names

lbragstad started a thread to come to consensus on standard policy name conventions so that we can come up with guidance when it comes time to start migrating policies to use default roles. Vote for your favorite bikeshed color on the thread [5].

[5] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134597.html

## Open Specs

Search query: https://bit.ly/2Pi6dGj

knikolla started working on a refreshable app creds proposal which will be useful for federation and Edge use cases [6]. wxy is working on the next iteration of hierarchical limit models by adding domains to the mix [7]. lbragstad reproposed the JWT spec with additional details that we discussed at the PTG [8].

[6] https://review.openstack.org/604201
[7] https://review.openstack.org/599491
[8] https://review.openstack.org/541903

## Recently Merged Changes

Search query: https://bit.ly/2pquOwT (link updated to include oslo.limit)

We merged 15 changes this week.

## Changes that need Attention

Search query: https://bit.ly/2PUk84S (link updated to include oslo.limit)

There are 50 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots.

## Bugs

This week we opened 6 new bugs and closed 3.
Bugs opened (5)
Bug #1793027 (keystone:Critical) opened by Morgan Fainberg https://bugs.launchpad.net/keystone/+bug/1793027
Bug #1793374 (keystone:Low) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1793374
Bug #1793421 (keystone:Low) opened by fupingxie https://bugs.launchpad.net/keystone/+bug/1793421
Bug #1792868 (keystone:Undecided) opened by Tao Li https://bugs.launchpad.net/keystone/+bug/1792868
Bug #1793347 (keystone:Undecided) opened by Tobias Urdin https://bugs.launchpad.net/keystone/+bug/1793347

Bugs fixed (3)
Bug #1793027 (keystone:Critical) fixed by Morgan Fainberg https://bugs.launchpad.net/keystone/+bug/1793027
Bug #1754677 (keystone:High) fixed by Raildo Mascena de Sousa Filho https://bugs.launchpad.net/keystone/+bug/1754677
Bug #1431987 (keystone:Wishlist) fixed by no one https://bugs.launchpad.net/keystone/+bug/1431987

## Milestone Outlook

https://releases.openstack.org/stein/schedule.html

Welcome to the Stein cycle! This cycle is a longer one so we have a bit of extra time between the spec freeze and feature freeze. lbragstad just updated the schedule so if you have issues with it we can probably still make adjustments.

## Shout-outs

Vishakha Agarwal has been doing a lot of work tackling our bug backlog, thanks a lot for your hard work!
## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67

From john.griffith8 at gmail.com Fri Sep 21 16:58:32 2018
From: john.griffith8 at gmail.com (John Griffith)
Date: Fri, 21 Sep 2018 10:58:32 -0600
Subject: [openstack-dev] [docs][cinder] about cinder volume qos
In-Reply-To: 
References: 
Message-ID: 

On Mon, Sep 10, 2018 at 2:22 PM Jay S Bryant wrote:

> On 9/10/2018 7:17 AM, Rambo wrote:
> > Hi all,
> > At first, I found it stated on doc.openstack.org [1] that we can define hard
> > performance limits for each volume, but in fact hard performance limits can
> > only be defined for each volume type. Also, the note "As of the Nova 18.0.0
> > Rocky release, front end QoS settings are only supported when using the
> > libvirt driver." — in fact, we supported front end QoS settings when using
> > the libvirt driver previously. Is the document wrong? Can you tell me more
> > about this? Thank you very much.
> >
> > [1] https://docs.openstack.org/cinder/latest/admin/blockstorage-basic-volume-qos.html
>
> Rambo,
>
> The performance limits are limited to a volume type as you need to have a
> volume type to be able to associate a QoS type with it. So, that makes
> sense.
>
> As for the documentation, it is a little confusing the way that is worded,
> but it isn't wrong. So, for QoS support thus far, including Nova 18.0.0, the front
> end QoS setting only works with the libvirt driver. I don't interpret that
> as meaning that there wasn't QoS support before that.

Right, the point is that now it's listed as supported ONLY on libvirt, as opposed to in the past, when it may have been supported on other hypervisors like hyper-v, xen, etc.
I don't know any of the details around how well those other implementations worked or what decisions were made, but I just read the update as noting that currently only libvirt is supported, not that anything has changed there.

> Jay
>
> > Best Regards
> > Rambo

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sean.mcginnis at gmx.com Fri Sep 21 16:59:44 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Fri, 21 Sep 2018 11:59:44 -0500
Subject: [openstack-dev] [cinder] Proposed Changes to the Core Team ...
In-Reply-To: <44a417c5-118e-f0ee-61a5-3eae398e64bb@gmail.com>
References: <44a417c5-118e-f0ee-61a5-3eae398e64bb@gmail.com>
Message-ID: <20180921165944.GA17331@sm-workstation>

On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote:
> All,
>
> In the last year we have had some changes to Core team participation.  This
> was a topic of discussion at the PTG in Denver last week.  Based on that
> discussion I have reached out to John Griffith and Winston D (Huang Zhiteng)
> and asked if they felt they could continue to be a part of the Core Team.
> Both agreed that it was time to relinquish their titles.
>
> So, I am proposing to remove John Griffith and Winston D from Cinder Core.
> If I hear no concerns with this plan in the next week I will remove them.
> It is hard to remove people who have been so instrumental to the early days
> of Cinder.  Your past contributions are greatly appreciated and the team
> would be happy to have you back if circumstances ever change.
>
> Sincerely,
> Jay Bryant

Really sad to see Winston go as he's been a long-time member, but I think over the last several releases it's been obvious he's had other priorities to compete with. It would be great if that were to change some day. He's made a lot of great contributions to Cinder over the years.

I'm a little reluctant to make any changes with John though. We've spoken briefly. He definitely is off to other things now, but with how deeply he has been involved up until recently with things like the multiattach implementation, replication, and other significant things, I would much rather have him around but less active than completely gone. Having a few good reviews is worth a lot.

I would propose we hold off on changing John's status for at least a cycle. He has indicated to me he would be willing to devote a little time to still doing reviews as his time allows, and I would hate to lose out on his expertise on changes to some things. Maybe we can give it a little more time and see if his other demands keep him too busy to participate and reevaluate later?

Sean

From tpb at dyncloud.net Fri Sep 21 17:03:05 2018
From: tpb at dyncloud.net (Tom Barron)
Date: Fri, 21 Sep 2018 13:03:05 -0400
Subject: [openstack-dev] [manila] Team Photos
Message-ID: <20180921170305.elwokuco4jpqxvf7@barron.net>

Manila team photos from the recent Stein PTG in Denver [1].

-- Tom Barron (tbarron)

[1] https://www.dropbox.com/sh/2pmvfkstudih2wf/AADI7Yo-wuJ2nmAIuYFEun5Ea/Manila?dl=0&subfolder_nav_tracking=1
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From john.griffith8 at gmail.com Fri Sep 21 17:06:34 2018 From: john.griffith8 at gmail.com (John Griffith) Date: Fri, 21 Sep 2018 11:06:34 -0600 Subject: [openstack-dev] [cinder] Proposed Changes to the Core Team ... In-Reply-To: <20180921165944.GA17331@sm-workstation> References: <44a417c5-118e-f0ee-61a5-3eae398e64bb@gmail.com> <20180921165944.GA17331@sm-workstation> Message-ID: On Fri, Sep 21, 2018 at 11:00 AM Sean McGinnis wrote: > On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote: > > All, > > > > In the last year we have had some changes to Core team participation. > This > > was a topic of discussion at the PTG in Denver last week. Based on that > > discussion I have reached out to John Griffith and Winston D (Huang > Zhiteng) > > and asked if they felt they could continue to be a part of the Core > Team. > > Both agreed that it was time to relinquish their titles. > > > > So, I am proposing to remove John Griffith and Winston D from Cinder > Core. > > If I hear no concerns with this plan in the next week I will remove them. > > > > It is hard to remove people who have been so instrumental to the early > days > > of Cinder. Your past contributions are greatly appreciated and the team > > would be happy to have you back if circumstances every change. > > > > Sincerely, > > Jay Bryant > > > > Really sad to see Winston go as he's been a long time member, but I think > over > the last several releases it's been obvious he's had other priorities to > compete with. It would be great if that were to change some day. He's made > a > lot of great contributions to Cinder over the years. > > I'm a little reluctant to make any changes with John though. We've spoken > briefly. 
He definitely is off to other things now, but with how deeply he > has > been involved up until recently with things like the multiattach > implementation, replication, and other significant things, I would much > rather > have him around but less active than completely gone. Having a few good > reviews > is worth a lot. > > I would propose we hold off on changing John's status for at least a > cycle. He > has indicated to me he would be willing to devote a little time to still > doing > reviews as his time allows, and I would hate to lose out on his expertise > on > changes to some things. Maybe we can give it a little more time and see if > his > other demands keep him too busy to participate and reevaluate later? > > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Hey Everyone, Now that I'm settling in on my other things I think I can still contribute a bit to Cinder on my own time. I'm still pretty fond of OpenStack and Cinder so would love the opportunity to give it a cycle to see if I can balance things and still be helpful. Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Sep 21 17:07:06 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 21 Sep 2018 12:07:06 -0500 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? 
In-Reply-To: <1537539983-sup-3232@lrrr.local> References: <0cac451b-6519-f0de-acb7-0703560b1f4d@gmail.com> <1537539983-sup-3232@lrrr.local> Message-ID: <20180921170706.GB17331@sm-workstation> On Fri, Sep 21, 2018 at 08:26:41AM -0600, Doug Hellmann wrote: > Excerpts from Elõd Illés's message of 2018-09-21 16:08:28 +0200: > > Hi, > > > > Here is an etherpad with the teams that have stable:follow-policy tag on > > their repos: > > > > https://etherpad.openstack.org/p/ocata-final-release-before-em > > > > On the links you can find reports about the open and unreleased changes, > > that could be a useful input for the before-EM/final release. > > Please have a look at the report (and review the open patches if there > > are) so that a release can be made if necessary. > > > > Thanks, > > > > Előd > > Thanks for pulling all of this information together! > > Doug > Really useful information Előd - thanks for getting that put together! Sean From openstack at nemebean.com Fri Sep 21 17:10:06 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 21 Sep 2018 12:10:06 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-30 update In-Reply-To: <3205ef53-9021-ea3d-10cd-ab27f885a219@gmail.com> References: <3205ef53-9021-ea3d-10cd-ab27f885a219@gmail.com> Message-ID: On 09/15/2018 10:30 AM, Matt Riedemann wrote: > Just a couple of updates this week. > > * I have assigned PTLs (for projects that have PTLs [1]) to their > respective tasks in StoryBoard [2]. If someone else on your team is > planning on working on the pre-upgrade check goal then please just > reassign ownership of the task. > > * I have started going through some project release notes looking for > upgrade impacts and leaving notes in the task assigned per project. > There were some questions at the PTG about what some projects could add > for pre-upgrade checks so check your task to see if I've left any > thoughts. I have not gone through all projects yet. 
> > * Ben Nemec has extracted the common upgrade check CLI framework into a > library [3] (thanks Ben!) and is working on getting that imported into > Gerrit. It would be great if projects that start working on the goal can > try using that library and provide feedback. The library has been imported into Gerrit, so people can start consuming it from there. Note that there are quite a few pending patches to it already that we can't merge until the governance change goes in and I can populate the reviewer list. I also pushed a patch to demonstrate how it would be integrated with the existing Nova code[1] and a patch to Designate to demonstrate how a completely new set of checks might be implemented[2]. The Designate check required some changes in the library config handling, but I think the patches proposed should work well now. I need to write some unit tests for it, but in my manual testing it has behaved as expected. As Matt said, any feedback is welcome. I suggest working off the end of the series that includes [3] as there are some significant api changes there. -Ben 1: https://review.openstack.org/#/c/603499 2: https://review.openstack.org/#/c/604430 3: https://review.openstack.org/#/c/604422/ From jungleboyj at gmail.com Fri Sep 21 17:11:55 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 21 Sep 2018 12:11:55 -0500 Subject: [openstack-dev] [cinder] Proposed Changes to the Core Team ... In-Reply-To: References: <44a417c5-118e-f0ee-61a5-3eae398e64bb@gmail.com> <20180921165944.GA17331@sm-workstation> Message-ID: <46f140fa-0f83-8d92-3aed-0889cf47546d@gmail.com> On 9/21/2018 12:06 PM, John Griffith wrote: > > > > On Fri, Sep 21, 2018 at 11:00 AM Sean McGinnis > wrote: > > On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote: > > All, > > > > In the last year we have had some changes to Core team > participation.  This > > was a topic of discussion at the PTG in Denver last week.  
Based > on that > > discussion I have reached out to John Griffith and Winston D > (Huang Zhiteng) > > and asked if they felt they could continue to be a part of the > Core Team. > > Both agreed that it was time to relinquish their titles. > > > > So, I am proposing to remove John Griffith and Winston D from > Cinder Core. > > If I hear no concerns with this plan in the next week I will > remove them. > > > > It is hard to remove people who have been so instrumental to the > early days > > of Cinder.  Your past contributions are greatly appreciated and > the team > > would be happy to have you back if circumstances every change. > > > > Sincerely, > > Jay Bryant > > > > Really sad to see Winston go as he's been a long time member, but > I think over > the last several releases it's been obvious he's had other > priorities to > compete with. It would be great if that were to change some day. > He's made a > lot of great contributions to Cinder over the years. > > I'm a little reluctant to make any changes with John though. We've > spoken > briefly. He definitely is off to other things now, but with how > deeply he has > been involved up until recently with things like the multiattach > implementation, replication, and other significant things, I would > much rather > have him around but less active than completely gone. Having a few > good reviews > is worth a lot. > > > > I would propose we hold off on changing John's status for at least > a cycle. He > has indicated to me he would be willing to devote a little time to > still doing > reviews as his time allows, and I would hate to lose out on his > expertise on > changes to some things. Maybe we can give it a little more time > and see if his > other demands keep him too busy to participate and reevaluate later? 
> > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Hey Everyone, > > Now that I'm settling in on my other things I think I can still > contribute a bit to Cinder on my own time.  I'm still pretty fond of > OpenStack and Cinder so would love the opportunity to give it a cycle > to see if I can balance things and still be helpful. > > Thanks, > John Sean, Thank you for your input on this and for following up with John. John, Glad that you are settling into your new position and think some time will free up for Cinder again.  I would be happy to have your continued input. I am removing you from consideration for removal. Jay (jungleboyj) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Fri Sep 21 17:20:07 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 21 Sep 2018 13:20:07 -0400 Subject: [openstack-dev] [manila] User Survey Results Message-ID: <20180921172007.ha5u6olopcnobhpt@barron.net> More PTG follow up :) The foundation shared results of the User Survey for Manila, where users were asked "Which OpenStack Shared File Systems (Manila) driver(s) are you using?" I've uploaded these in a Google Sheets document here [1]. The first tab has the raw results as passed to me by the Foundation, the second tabulates these, and the third summarizes with a bar chart. Do let me know if you see any errors :) -- Tom Barron (tbarron) [1] https://docs.google.com/spreadsheets/d/1J83vnnuuADVwACeJq1g8snxMRM5VAfTbBD6svVeH4yg/edit?usp=sharing -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jungleboyj at gmail.com Fri Sep 21 17:29:05 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 21 Sep 2018 12:29:05 -0500 Subject: [openstack-dev] [cinder] Mid-Cycle Planning ... Message-ID: <9e15e554-0fdf-2b20-b3c9-c7ebf966c185@gmail.com> Team, As we discussed at the PTG I have started an etherpad to do some planning for a possible Cinder Mid-cycle meeting.  Please check out the etherpad [1] and leave your input. Thanks! Jay [1] https://etherpad.openstack.org/p/cinder-stein-mid-cycle-planning From mrhillsman at gmail.com Fri Sep 21 17:55:09 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Fri, 21 Sep 2018 12:55:09 -0500 Subject: [openstack-dev] [Openstack-sigs] Capturing Feedback/Input In-Reply-To: <1537546393-sup-9882@lrrr.local> References: <1537540740-sup-4229@lrrr.local> <1537546393-sup-9882@lrrr.local> Message-ID: On Fri, Sep 21, 2018 at 11:16 AM Doug Hellmann wrote: > Excerpts from Melvin Hillsman's message of 2018-09-21 10:18:26 -0500: > > On Fri, Sep 21, 2018 at 9:41 AM Doug Hellmann > wrote: > > > > > Excerpts from Melvin Hillsman's message of 2018-09-20 17:30:32 -0500: > > > > Hey everyone, > > > > > > > > During the TC meeting at the PTG we discussed the ideal way to > capture > > > > user-centric feedback; particular from our various groups like SIGs, > WGs, > > > > etc. > > > > > > > > Options that were mentioned ranged from a wiki page to a standalone > > > > solution like discourse. > > > > > > > > While there is no perfect solution it was determined that Storyboard > > > could > > > > facilitate this. It would play out where there is a project group > > > > openstack-uc? and each of the SIGs, WGs, etc would have a project > under > > > > this group; if I am wrong someone else in the room correct me. 
> > > > > > > > The entire point is a first step (maybe final) in centralizing > > > user-centric > > > > feedback that does not require any extra overhead be it cost, time, > or > > > > otherwise. Just kicking off a discussion so others have a chance to > chime > > > > in before anyone pulls the plug or pushes the button on anything and > we > > > > settle as a community on what makes sense. > > > > > > > > > > I like the idea of tracking the information in storyboard. That > > > said, one of the main purposes of creating SIGs was to separate > > > those groups from the appearance that they were "managed" by the > > > TC or UC. So, rather than creating a UC-focused project group, if > > > we need a single project group at all, I would rather we call it > > > "SIGs" or something similar. > > > > > > > What you bring up re appearances makes sense definitely. Maybe we call it > > openstack-feedback since the purpose is focused on that and I actually > > looked at -uc as user-centric rather than user-committee; but > appearances :) > > Feedback implies that SIGs aren't engaged in creating OpenStack, though, > and I think that's the perception we're trying to change. > > > I think limiting it to SIGs will well, limit it to SIGs, and again could > > appear to be specific to those groups rather than for example the Public > > Cloud WG or Financial Team. > > OK, I thought those groups were SIGs. > > Maybe we're overthinking the organization on this. What is special about > the items that would be on this list compared to items opened directly > against projects? > Yeah unfortunately we do have a tendency to overthink/complicate things. Not saying Storyboard is the right tool but suggested rather than having something extra to maintain was what I understood. 
There are at least 3 things that were to be addressed:

- single pane so folks know where to provide/see updates
- it is not a catchall/dumpsite
- something still needs to be fleshed out/prioritized (Public Cloud WG's missing features spreadsheet for example)
- not specific to a single project (I thought this was a given since there is already a process/workflow for a single project)

I could very well be wrong so I am open to being corrected. From my perspective the idea in the room was to not circumvent anything internal but rather make it easy for external viewers, passersby, etc. We also discussed putting feedback gathered from Ops Meetups, OpenStack Days, and local meetups/events here. > > Doug > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Sep 21 18:03:19 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 21 Sep 2018 19:03:19 +0100 (BST) Subject: [openstack-dev] [placement] update 18-38 Message-ID: HTML: https://anticdent.org/placement-update-18-38.html Here's a placement update. Last week there wasn't one, because of the PTG. There will be some references to various PTG stuff within, but since we haven't fully resolved what the priorities will be, the discussion here will be somewhat unfocused.

# Most Important

Two main things to do: As is typical (at least in my experience), last week we discussed and planned more work than anyone could reasonably be expected to accomplish in a few years, let alone a single cycle, so there will be an inevitable winnowing and prioritizing of ideas and specs over the next few days.
There's some discussion of priorities on an [etherpad](https://etherpad.openstack.org/p/nova-ptg-stein-priorities), but the details of which to do and how to implement them are not fully resolved. Reviewing the specs (below) ought to help that.

We're still working towards a complete set of integration and upgrade tests for the new placement repo. The unit and functional tests are happy and nicely fast, but they aren't covering important things like upgrading from placement-in-nova to just-placement, nor do they do any live testing with a full devstack. Work is in progress on all of this; see the "extraction" section below.

# What's Changed

We had a meeting to come up with a plan for migrating placement to an independent project. Mel wrote up a [summary email](http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html) with the steps.

# Questions and Links

(I've added "links" to this section because there's a good one this week, why not?)

* There was a demo at the PTG for the minimum bandwidth work. That's been written up in a [blog post](https://rubasov.github.io/2018/09/21/openstack-qos-min-bw-demo.html).
* Yesterday, belmoreira showed up in [#openstack-placement](http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2018-09-20.log.html#t2018-09-20T14:11:59) with some issues with expected resource providers not showing up in allocation candidates. This was traced back to `max_unit` for `VCPU` being locked at == `total` and hardware which had had SMT turned off now reporting fewer CPUs, thus being unable to accept existing large flavors. Discussion ensued about ways to potentially make `max_unit` more manageable by operators. The existing constraint is there for a reason (discussed in IRC) but that reason is not universally agreed. There are two issues with this: the "reason" is not universally agreed, and we didn't resolve that.
Also, management of `max_unit` of any inventory gets more complicated in a world of complex NUMA topologies.

# Bugs

* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 17. No change (in number) from last time.
* [In progress placement bugs](https://goo.gl/vzGGDQ): 10. Same as last time.

# Specs

New (or newly discovered) ones are at the end. Specs which have merged have been removed. As stated above: We still haven't solidified priorities, so some specs may merge as "low priority".

* Account for host agg allocation ratio in placement (Still in rocky/)
* Add subtree filter for GET /resource_providers
* Resource provider - request group mapping in allocation candidate
* VMware: place instances on resource pool (still in rocky/)
* Standardize CPU resource tracking
* Allow overcommit of dedicated CPU (Has an alternative which changes allocations to a float)
* List resource providers having inventory
* Bi-directional enforcement of traits
* allow transferring ownership of instance
* Modelling passthrough devices for report to placement
* Propose counting quota usage from placement and API database (A bit out of date but may be worth resurrecting)
* Spec: allocation candidates in tree
* [WIP] generic device discovery policy
* Nova Cyborg interaction specification.
* supporting virtual NVDIMM devices
* Spec: Support filtering by forbidden aggregate
* Proposes NUMA topology with RPs
* Support initial allocation ratios (There are at least two pending allocation ratio handling cleanup specs. It's not clear from the PTG etherpad which of these was chosen as the future (we did choose, but the etherpad is confusing). 544683 (above) is the other one.)
* Count quota based on resource class

# Main Themes

These are interim themes while we work out what priorities are.

## Making Nested Useful

An acknowledged outcome from the PTG was that we need to do the work to make workloads that want to use nested resource providers actually able to land on a host somewhere.
This involves work across many parts of nova and could easily lead to a mass of bug fixes in placement. I'm probably missing a fair bit but the following topics are good starting points:

*
*
*

## Consumer Generations

gibi is still working hard to drive home support for consumer generations on the nova side. Because of some dependency management that stuff is currently in the following topic:

*

## Extraction

As mentioned above, getting the extracted placement happy is proceeding apace. Besides many of the generic cleanups happening [to the repo](https://review.openstack.org/#/q/project:openstack/placement+status:open) we need to focus some effort on upgrade and integration testing, docs publishing, and doc correctness.

Dan has started a [database migration script](https://review.openstack.org/#/c/603234/) which will be used by deployers and grenade for upgrades. Matt is hoping to make some progress on the grenade side of things. I have a [hacked up devstack](https://review.openstack.org/#/c/600162/) for using the extracted placement. All of this is dependent on:

* database migrations being "collapsed"
* the existence of a `placement-manage` script to initialize the database

I made a faked up [placement-manage](https://review.openstack.org/#/c/600161/) for the devstack patch above, but it only creates tables, doesn't migrate, and is not fit for purpose as a generic CLI.

I have started [some experiments](https://review.openstack.org/#/c/601614/) on using [gabbi-tempest](https://pypi.org/project/gabbi-tempest/) to drive some integration tests for placement with solely gabbi YAML files. I initially did this using "legacy" style zuul jobs, and made it work, but it was ugly and I've since started using more modern zuul, but haven't yet made it work.

# Other

As with last time, I'm not going to make a list of links to pending changes that aren't already listed above.
I'll start doing that again eventually (once priorities are more clear), but for now it is useful to look at [open placement patches](https://review.openstack.org/#/q/project:openstack/placement+status:open) and patches from everywhere which [mention placement in the commit message](https://review.openstack.org/#/q/message:placement+status:open). # End In case anyone is wondering where I am, I'm out M-W next week. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From johnsomor at gmail.com Fri Sep 21 18:11:53 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 21 Sep 2018 11:11:53 -0700 Subject: [openstack-dev] OpenStack Project Navigator In-Reply-To: <5BA4F940.7050608@tipit.net> References: <5BA4F940.7050608@tipit.net> Message-ID: Thank you Jimmy for making this available for updates. I was unable to find the code backing the project tags section of the Project Navigator pages. Our page is missing some upgrade tags and is showing duplicate "Stable branch policy" tags. https://www.openstack.org/software/releases/rocky/components/octavia Is there a different repository for the tags code? Thanks, Michael On Fri, Sep 21, 2018 at 6:59 AM Jimmy Mcarthur wrote: > > Following up on my (very brief) talk from the PTG, you can now propose > changes to the Project Navigator by going to > https://git.openstack.org/cgit/openstack/openstack-map repository > > Once your patch is merged, the page should reflect the changes straight > away. 
> > Cheers, > Jimmy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Fri Sep 21 18:25:28 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 21 Sep 2018 13:25:28 -0500 Subject: [openstack-dev] [oslo] Storyboard test import done Message-ID: <9895456b-fba0-2611-a193-ac3e02a4ff10@nemebean.com> Hi, The test import of the Oslo projects is done (thanks Kendall!), so everyone from the Oslo team please take a look around and provide feedback on it. If we do the migration this cycle we want to do it earlier rather than later so we aren't dealing with migration fallout while trying to stabilize things at the end. https://storyboard-dev.openstack.org/#!/project_group/oslo Thanks. -Ben From fungi at yuggoth.org Fri Sep 21 19:24:32 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 21 Sep 2018 19:24:32 +0000 Subject: [openstack-dev] [Openstack-sigs] Capturing Feedback/Input In-Reply-To: References: <1537540740-sup-4229@lrrr.local> <1537546393-sup-9882@lrrr.local> Message-ID: <20180921192432.k23x2u3w7626cder@yuggoth.org> On 2018-09-21 12:55:09 -0500 (-0500), Melvin Hillsman wrote: [...] > Yeah unfortunately we do have a tendency to overthink/complicate > things. Not saying Storyboard is the right tool but suggested > rather than having something extra to maintain was what I > understood. There are at least 3 things that were to be addressed: > > - single pane so folks know where to provide/see updates Not all OpenStack projects use the same task trackers currently and there's no guarantee that they ever will, so this is a best effort only. 
Odds are you may wind up duplicating some information also present in the Nova project on Launchpad, the Tripleo project on Trello and the Foobly project on Bugzilla (I made this last one up, in case it's not obvious). > - it is not a catchall/dumpsite If it looks generic enough, it will become that unless there are people actively devoted to triaging and pruning submissions to curate them... a tedious and thankless long-term commitment, to be sure. > - something still needs to be flushed out/prioritized (Public > Cloud WG's missing features spreadsheet for example) This is definitely a good source of input, but still needs someone to determine which various projects/services the tasks for them get slotted into and then help prioritizing and managing spec submissions on a per-team basis. > - not specific to a single project (i thought this was a given > since there is already a process/workflow for single project) The way to do that on storyboard.openstack.org is to give it a project of its own. Basically just couple it to a new, empty Git repository and then the people doing these tasks still have the option of also putting that repository to some use later (for example, to house their workflow documentation). > I could very well be wrong so I am open to be corrected. From my > perspective the idea in the room was to not circumvent anything > internal but rather make it easy for external viewers, passerbys, > etc. When feedback is gathered from Ops Meetup, OpenStack Days, > Local meetups/events, we discussed putting that here as well. It seems a fine plan, just keep in mind that documenting and publishing feedback doesn't magically translate into developers acting on any of it (and this is far from the first time it's been attempted). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Fri Sep 21 20:04:50 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 21 Sep 2018 15:04:50 -0500 Subject: [openstack-dev] OpenStack Project Navigator In-Reply-To: References: <5BA4F940.7050608@tipit.net> Message-ID: <8f883bec-2663-0072-65e3-3cd65bd6660f@gmail.com> On 9/21/2018 1:11 PM, Michael Johnson wrote: > Thank you Jimmy for making this available for updates. > > I was unable to find the code backing the project tags section of the > Project Navigator pages. > Our page is missing some upgrade tags and is showing duplicate "Stable > branch policy" tags. > > https://www.openstack.org/software/releases/rocky/components/octavia > > Is there a different repository for the tags code? Those are down in the project details section of the page, look to the right and there is a 'tag details' column. The tags are descriptive and link to the details on each tag. -- Thanks, Matt From mriedemos at gmail.com Fri Sep 21 20:53:50 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 21 Sep 2018 15:53:50 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-29 Update Message-ID: <2ce541ea-7f2c-6d12-2831-3a658e69e52e@gmail.com> Updates for this week: * As bnemec noted in the last update [1], he's making some progress with the oslo.upgradecheck library. He's retrofitting the nova-status upgrade check code to use the library and has a patch up for designate to use it. * The only two projects that I'm aware of with patches up at this point are monasca [2] and designate [3]. The monasca one is tricky because as I've found going through release notes for some projects, they don't really have any major upgrade impacts so writing checks is not obvious. I don't have a great solution here. What monasca has done is add the framework with a noop check. 
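[Editor's note: to make the "framework with a noop check" idea concrete, here is a minimal, self-contained sketch of that pattern. It deliberately does not import the real oslo.upgradecheck library; the `Code`, `Result`, and `UpgradeCommands` classes below are simplified stand-ins for the shape such a framework provides, so treat all names and signatures as illustrative only.]

```python
import enum


class Code(enum.IntEnum):
    # Result codes ordered so that a larger value is a worse outcome.
    # (Stand-in, not the real oslo.upgradecheck API.)
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2


class Result:
    """Outcome of a single upgrade check (stand-in type)."""

    def __init__(self, code, details=None):
        self.code = code
        self.details = details


class UpgradeCommands:
    """Runs every registered check and reports the worst result code."""

    _upgrade_checks = ()  # tuple of (name, check_function) pairs

    def check(self):
        worst = Code.SUCCESS
        for name, func in self._upgrade_checks:
            result = func(self)
            print('%s: %s' % (name, result.code.name))
            worst = max(worst, result.code)
        return int(worst)  # suitable as a process exit code


class ProjectCommands(UpgradeCommands):
    """What a project with no upgrade impacts yet might ship."""

    def _check_placeholder(self):
        # No upgrade impacts this release; succeed so deployment tooling
        # can already wire in "<project>-status upgrade check" calls.
        return Result(Code.SUCCESS, 'no checks needed for this release')

    _upgrade_checks = (('Placeholder', _check_placeholder),)


exit_code = ProjectCommands().check()  # prints "Placeholder: SUCCESS"
```

The value of shipping a placeholder like this is that the `<project>-status upgrade check` entry point exists everywhere at once, and real checks can later be slotted into `_upgrade_checks` without deployment tools having to change.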
If others are in the same situation, I'd like to hear your thoughts on what you think makes sense here. The alternative is these projects opt out of the goal for Stein and just add the check code later when it makes sense (but people might forget or not care to do that later if it's not a goal). * The reference docs I wrote for writing upgrade checks is published now [4]. As I've been answering some questions in storyboard and IRC, it's obvious that I need to add some FAQs into those docs because I've taken some of this for granted on how it works in nova, so I'll push a docs change for some of that as well and link it back into the story. As always, feel free to reach out to me with any questions or issues you might have with completing this goal (or just getting started). [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134972.html [2] https://review.openstack.org/#/c/603465/ [3] https://review.openstack.org/#/c/604430/ [4] https://docs.openstack.org/nova/latest/reference/upgrade-checks.html -- Thanks, Matt From openstack at nemebean.com Fri Sep 21 21:19:35 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 21 Sep 2018 16:19:35 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-29 Update In-Reply-To: <2ce541ea-7f2c-6d12-2831-3a658e69e52e@gmail.com> References: <2ce541ea-7f2c-6d12-2831-3a658e69e52e@gmail.com> Message-ID: On 09/21/2018 03:53 PM, Matt Riedemann wrote: > Updates for this week: > > * As bnemec noted in the last update [1], he's making some progress with > the oslo.upgradecheck library. He's retrofitting the nova-status upgrade > check code to use the library and has a patch up for designate to use it. > > * The only two projects that I'm aware of with patches up at this point > are monasca [2] and designate [3]. The monasca one is tricky because as > I've found going through release notes for some projects, they don't > really have any major upgrade impacts so writing checks is not obvious. 
> I don't have a great solution here. What monasca has done is add the > framework with a noop check. If others are in the same situation, I'd > like to hear your thoughts on what you think makes sense here. The > alternative is these projects opt out of the goal for Stein and just add > the check code later when it makes sense (but people might forget or not > care to do that later if it's not a goal). My inclination is for the command to exist with a noop check, the main reason being that if we create it for everyone this cycle then the deployment tools can implement calls to the status commands all at once. If we wait until checks are needed then someone has to not only implement it in the service but also remember to go update all of the deployment tools. Implementing a noop check should be pretty trivial with the library so it isn't a huge imposition. > > * The reference docs I wrote for writing upgrade checks is published now > [4]. As I've been answering some questions in storyboard and IRC, it's > obvious that I need to add some FAQs into those docs because I've taken > some of this for granted on how it works in nova, so I'll push a docs > change for some of that as well and link it back into the story. > > As always, feel free to reach out to me with any questions or issues you > might have with completing this goal (or just getting started). 
> > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134972.html > > [2] https://review.openstack.org/#/c/603465/ > [3] https://review.openstack.org/#/c/604430/ > [4] https://docs.openstack.org/nova/latest/reference/upgrade-checks.html > From johnsomor at gmail.com Fri Sep 21 21:32:30 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 21 Sep 2018 14:32:30 -0700 Subject: [openstack-dev] OpenStack Project Navigator In-Reply-To: <8f883bec-2663-0072-65e3-3cd65bd6660f@gmail.com> References: <5BA4F940.7050608@tipit.net> <8f883bec-2663-0072-65e3-3cd65bd6660f@gmail.com> Message-ID: Matt, I'm a bit confused by your response. I'm not looking for a definition of the tags, that is very clear. I'm looking for the source code backing the page that is rendering which tags a project has. This code appears to be broken and not rendering the tags correctly and I wanted to see if I could fix it. Michael On Fri, Sep 21, 2018 at 1:05 PM Matt Riedemann wrote: > > On 9/21/2018 1:11 PM, Michael Johnson wrote: > > Thank you Jimmy for making this available for updates. > > > > I was unable to find the code backing the project tags section of the > > Project Navigator pages. > > Our page is missing some upgrade tags and is showing duplicate "Stable > > branch policy" tags. > > > > https://www.openstack.org/software/releases/rocky/components/octavia > > > > Is there a different repository for the tags code? > > Those are down in the project details section of the page, look to the > right and there is a 'tag details' column. The tags are descriptive and > link to the details on each tag. 
> > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Fri Sep 21 21:49:12 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 21 Sep 2018 16:49:12 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-29 Update In-Reply-To: <2ce541ea-7f2c-6d12-2831-3a658e69e52e@gmail.com> References: <2ce541ea-7f2c-6d12-2831-3a658e69e52e@gmail.com> Message-ID: <6c638382-7769-4166-7348-807bc9898ceb@gmail.com> On 9/21/2018 3:53 PM, Matt Riedemann wrote: > * The reference docs I wrote for writing upgrade checks is published now > [4]. As I've been answering some questions in storyboard and IRC, it's > obvious that I need to add some FAQs into those docs because I've taken > some of this for granted on how it works in nova, so I'll push a docs > change for some of that as well and link it back into the story. https://review.openstack.org/#/c/604486/ for anyone that thinks I missed something. -- Thanks, Matt From doug at doughellmann.com Fri Sep 21 23:06:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 21 Sep 2018 19:06:16 -0400 Subject: [openstack-dev] [Openstack-sigs] Capturing Feedback/Input In-Reply-To: References: <1537540740-sup-4229@lrrr.local> <1537546393-sup-9882@lrrr.local> Message-ID: Melvin Hillsman writes: > On Fri, Sep 21, 2018 at 11:16 AM Doug Hellmann > wrote: > >> Maybe we're overthinking the organization on this. What is special about >> the items that would be on this list compared to items opened directly >> against projects? >> > > Yeah unfortunately we do have a tendency to overthink/complicate things. > Not saying Storyboard is the right tool but suggested rather than having > something extra to maintain was what I understood. 
There are at least 3 > things that were to be addressed: > > - single pane so folks know where to provide/see updates > - it is not a catchall/dumpsite > - something still needs to be flushed out/prioritized (Public Cloud WG's > missing features spreadsheet for example) > - not specific to a single project (i thought this was a given since there > is already a process/workflow for single project) > > I could very well be wrong so I am open to be corrected. From my > perspective the idea in the room was to not circumvent anything internal > but rather make it easy for external viewers, passerbys, etc. When feedback > is gathered from Ops Meetup, OpenStack Days, Local meetups/events, we > discussed putting that here as well. Those are all good points. Sorry for making you rehash stuff that was already discussed. So I guess the idea is to have a place where several groups can track their own backlogs, but then a prioritized list can be created from those separate backlogs? In the Storyboard data model, that sounds like separate projects for each SIG or WG, and then 1 worklist that they all manually update with their priority items. I say "manually" because if we just combine all of the backlogs we don't have any good way to order the items and select the top N. Doug From jimmy at openstack.org Fri Sep 21 23:21:49 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 21 Sep 2018 18:21:49 -0500 Subject: [openstack-dev] OpenStack Project Navigator In-Reply-To: References: <5BA4F940.7050608@tipit.net> <8f883bec-2663-0072-65e3-3cd65bd6660f@gmail.com> Message-ID: <5BA57D0D.5080004@openstack.org> The TC tags are indeed in a different repo: https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml Let me know if this makes sense. Jimmy > Michael Johnson > September 21, 2018 at 4:32 PM > Matt, > > I'm a bit confused by your response. I'm not looking for a definition > of the tags, that is very clear. 
> > I'm looking for the source code backing the page that is rendering > which tags a project has. > This code appears to be broken and not rendering the tags correctly > and I wanted to see if I could fix it. > > Michael > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Matt Riedemann > September 21, 2018 at 3:04 PM > > > Those are down in the project details section of the page, look to the > right and there is a 'tag details' column. The tags are descriptive > and link to the details on each tag. > > Michael Johnson > September 21, 2018 at 1:11 PM > Thank you Jimmy for making this available for updates. > > I was unable to find the code backing the project tags section of the > Project Navigator pages. > Our page is missing some upgrade tags and is showing duplicate "Stable > branch policy" tags. > > https://www.openstack.org/software/releases/rocky/components/octavia > > Is there a different repository for the tags code? > > Thanks, > Michael > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy Mcarthur > September 21, 2018 at 8:59 AM > Following up on my (very brief) talk from the PTG, you can now propose > changes to the Project Navigator by going to > https://git.openstack.org/cgit/openstack/openstack-map repository > > Once your patch is merged, the page should reflect the changes > straight away. 
> > Cheers, > Jimmy > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jankihc91 at gmail.com Fri Sep 21 23:21:52 2018 From: jankihc91 at gmail.com (Janki Chhatbar) Date: Sat, 22 Sep 2018 04:51:52 +0530 Subject: [openstack-dev] [Tripleo] Automating role generation In-Reply-To: References: <265867de-601f-6498-dc7f-4b50bf03904d@redhat.com> Message-ID: Hi All As per the discussion at PTG, I have filed a BP [1]. I will push a spec sometime around mid-October. [1]. https://blueprints.launchpad.net/tripleo/+spec/automatic-role-generation On Tue, Sep 4, 2018 at 2:56 PM Steven Hardy wrote: > On Tue, Sep 4, 2018 at 9:48 AM, Jiří Stránský wrote: > > On 4.9.2018 08:13, Janki Chhatbar wrote: > >> > >> Hi > >> > >> I am looking to automate role file generation in TripleO. The idea is > >> basically for an operator to create a simple yaml file (operator.yaml, > >> say) > >> listing services that are needed and then TripleO to generate > >> Controller.yaml enabling only those services that are mentioned. > >> > >> For example: > >> operator.yaml > >> services: > >> Glance > >> OpenDaylight > >> Neutron ovs agent > > > > > > I'm not sure it's worth introducing a new file format as such, if the > > purpose is essentially to expand e.g. "Glance" into > > "OS::TripleO::Services::GlanceApi" and > > "OS::TripleO::Services::GlanceRegistry"? It would be another layer of > > indirection (additional mental work for the operator who wants to > understand > > how things work), while the layer doesn't make too much difference in > > preparation of the role. At least that's my subjective view. > > > >> > >> Then TripleO should > >> 1. 
Fail because ODL and OVS agent are either-or services > > > > > > +1 i think having something like this would be useful. > > > >> 2. After operator.yaml is modified to remove Neutron ovs agent, it > should > >> generate Controller.yaml with the content below > >> > >> ServicesDefault: > >> - OS::TripleO::Services::GlanceApi > >> - OS::TripleO::Services::GlanceRegistry > >> - OS::TripleO::Services::OpenDaylightApi > >> - OS::TripleO::Services::OpenDaylightOvs > >> > >> Currently, operator has to manually edit the role file (especially when > >> deployed with ODL) and I have seen many instances of failing deployment > >> due > >> to variations of OVS, OVN and ODL services enabled when they are > actually > >> exclusive. > > > > > > Having validations on the service list would be helpful IMO, e.g. "these > > services must not be in one deployment together", "these services must > not > > be in one role together", "these services must be together", "we > recommend > > this service to be in every role" (i'm thinking TripleOPackages, Ntp, > ...) > > etc. But as mentioned above, i think it would be better if we worked > > directly with the "OS::TripleO::Services..." values rather than a new > layer > > of proxy-values. > > > > Additional random related thoughts: > > > > * Operator should still be able to disobey what the validation suggests, > if > > they decide so. > > > > * Would be nice to have the info about particular services (e.g. what > can't > > be together) specified declaratively somewhere (TripleO's favorite thing > in > > the world -- YAML?). > > > > * We could start with just one type of validation, e.g. the mutual > > exclusivity rule for ODL vs. OVS, but would be nice to have the solution > > easily expandable for new rule types. > > This is similar to how the UI uses the capabilities-map.yaml, so > perhaps we can use that as the place to describe service dependencies > and conflicts? 
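The validation idea sketched in this thread — expand requested services into "OS::TripleO::Services..." entries and reject either-or combinations — could look roughly like the following. This is a minimal sketch: the service mapping and rule table are illustrative examples, not the real capabilities-map.yaml contents.

```python
# Illustrative sketch of the proposed validation: expand requested service
# names into OS::TripleO::Services::* entries and reject mutually exclusive
# ("either-or") combinations.  Mapping and rules below are examples only.
SERVICE_MAP = {
    "Glance": ["OS::TripleO::Services::GlanceApi",
               "OS::TripleO::Services::GlanceRegistry"],
    "OpenDaylight": ["OS::TripleO::Services::OpenDaylightApi",
                     "OS::TripleO::Services::OpenDaylightOvs"],
    "Neutron ovs agent": ["OS::TripleO::Services::NeutronOvsAgent"],
}

# "these services must not be in one deployment together"
MUTUALLY_EXCLUSIVE = [{"OpenDaylight", "Neutron ovs agent"}]


def build_services_default(requested):
    """Validate the requested service list and return ServicesDefault."""
    for rule in MUTUALLY_EXCLUSIVE:
        clash = rule.intersection(requested)
        if len(clash) > 1:
            raise ValueError("either-or services requested together: %s"
                             % ", ".join(sorted(clash)))
    services = []
    for name in requested:
        services.extend(SERVICE_MAP[name])
    return services
```

With `["Glance", "OpenDaylight"]` this yields the four ServicesDefault entries quoted above, while requesting both OpenDaylight and the OVS agent fails the either-or rule — matching the behavior the thread asks for.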
> > > https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities-map.yaml > > Currently this isn't used at all for the CLI, but I can imagine some > kind of wizard interface being useful, e.g. you could say enable > "Glance" group and it'd automatically pull in all glance dependencies? > > Another thing to mention is this doesn't necessarily have to generate > a new role (although it could), the *Services parameter for existing > roles can be overridden, so it might be simpler to generate an > environment file instead. > > Steve > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanking you Janki Chhatbar OpenStack | Docker | SDN simplyexplainedblog.wordpress.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Sat Sep 22 00:41:26 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 21 Sep 2018 17:41:26 -0700 Subject: [openstack-dev] OpenStack Project Navigator In-Reply-To: <5BA57D0D.5080004@openstack.org> References: <5BA4F940.7050608@tipit.net> <8f883bec-2663-0072-65e3-3cd65bd6660f@gmail.com> <5BA57D0D.5080004@openstack.org> Message-ID: Jimmy, Yes, the tags are correct in https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml but are not correct in Project Navigator. I am asking: where is the git repository with the Project Navigator code that creates the "Project Details" section? I am looking for the Project Navigator source code. Michael On Fri, Sep 21, 2018 at 4:22 PM Jimmy McArthur wrote: > > The TC tags are indeed in a different repo: https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml > > Let me know if this makes sense. 
> > Jimmy > > Michael Johnson September 21, 2018 at 4:32 PM > Matt, > > I'm a bit confused by your response. I'm not looking for a definition > of the tags, that is very clear. > > I'm looking for the source code backing the page that is rendering > which tags a project has. > This code appears to be broken and not rendering the tags correctly > and I wanted to see if I could fix it. > > Michael > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Matt Riedemann September 21, 2018 at 3:04 PM > > > Those are down in the project details section of the page, look to the right and there is a 'tag details' column. The tags are descriptive and link to the details on each tag. > > Michael Johnson September 21, 2018 at 1:11 PM > Thank you Jimmy for making this available for updates. > > I was unable to find the code backing the project tags section of the > Project Navigator pages. > Our page is missing some upgrade tags and is showing duplicate "Stable > branch policy" tags. > > https://www.openstack.org/software/releases/rocky/components/octavia > > Is there a different repository for the tags code? > > Thanks, > Michael > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy Mcarthur September 21, 2018 at 8:59 AM > Following up on my (very brief) talk from the PTG, you can now propose changes to the Project Navigator by going to https://git.openstack.org/cgit/openstack/openstack-map repository > > Once your patch is merged, the page should reflect the changes straight away. 
> > Cheers, > Jimmy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From nguyentrihai93 at gmail.com Sat Sep 22 01:22:57 2018 From: nguyentrihai93 at gmail.com (Nguyễn Trí Hải) Date: Sat, 22 Sep 2018 10:22:57 +0900 Subject: [openstack-dev] [goals][python3][karbor] Please process the python3-first patches In-Reply-To: References: Message-ID: Hi, Thanks for reviewing them. Only two patches still need review: https://review.openstack.org/#/c/594827/ https://review.openstack.org/#/c/594815/ (Depends-On: https://review.openstack.org/#/c/596289/) On Fri, Sep 21, 2018 at 10:33 PM jiaopengju wrote: > Thanks for pushing these patches, we will review and merge them ASAP. > > > Original Message > *From:* Nguyễn Trí Hải > *To:* OpenStack Development Mailing List (not for usage questions)< > openstack-dev at lists.openstack.org> > *Cc:* jiaopengju > *Sent:* Friday, Sep 21, 2018 20:36 > *Subject:* [openstack-dev][goals][python3][karbor] Please process > the python3-first patches > > Hi Karbor team and Karbor PTL, > > As part of the "Run under Python 3 by default" community goal [1] for > OpenStack in the Stein cycle, I proposed the patches related to > the python3-first goal quite a long time ago. However, there has been no activity on > those patches. > > Please pick up those patches and review them. 
Those patches belong to: > - openstack/karbor > - openstack/karbor-dashboard > - openstack/python-karborclient > > Here they are: > https://review.openstack.org/#/q/project:%255E.*karbor.*+topic:python3-first+status:open > > [1] https://governance.openstack.org/tc/goals/stein/python3-first.html > > -- > > Nguyen Tri Hai / Ph.D. Student > > ANDA Lab., Soongsil Univ., Seoul, South Korea > -- Nguyen Tri Hai / Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Sat Sep 22 03:55:14 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Sat, 22 Sep 2018 12:55:14 +0900 Subject: [openstack-dev] [neutron] heads up to long time ovs users... In-Reply-To: <20180921013656.31737B3BA8@mail.valinux.co.jp> References: <20180921013656.31737B3BA8@mail.valinux.co.jp> Message-ID: The important point of this notice is that packet drops will happen when switching the of_interface option from ovs-ofctl (which was the default value in the old releases) to native (which is the current default). Once neutron drops the option, if deployers use the legacy value "ovs-ofctl", they will hit some packet loss when upgrading neutron to Stein. We have no actual data on large deployments so far and don't know how this change impacts real deployments. Your feedback would be really appreciated. Best regards, Akihiro Motoki (irc: amotoki) On Fri, Sep 21, 2018 at 10:37 IWAMOTO Toshihiro wrote: > The neutron team is finally removing the ovs-ofctl option. > > https://review.openstack.org/#/c/599496/ > > The ovs-ofctl of_interface option hasn't been the default since Newton and was > deprecated in Pike. 
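For deployers wondering whether their agents are affected, the option in question can be checked in the agent configuration before upgrading. A hedged sketch — the `[ovs]` section name and the usual `/etc/neutron/plugins/ml2/openvswitch_agent.ini` location are assumptions here; verify against your release's config reference:

```python
import configparser

# Sample agent configuration; a real deployment would read the contents of
# its openvswitch_agent.ini (the exact path varies by distribution).
SAMPLE_CONFIG = """
[ovs]
of_interface = ovs-ofctl
"""


def uses_legacy_of_interface(ini_text):
    """Return True if the config still selects the deprecated ovs-ofctl
    implementation and will therefore hit the restart packet loss."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    # "native" has been the default since Pike, so a missing option means
    # the deployment is already on the new implementation.
    return cfg.get("ovs", "of_interface", fallback="native") == "ovs-ofctl"
```

A deployment that gets `True` back should plan the switch (and the associated brief loss of communication) before upgrading to a release that drops the option.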
> > So, if you are a long-time ovs-agent user and upgrading to an > upcoming release, you must switch from the ovs-ofctl implementation to > the native implementation and are affected by the following issue. > > https://bugs.launchpad.net/neutron/+bug/1793354 > > The loss of communication mentioned in this bug report would be a few > seconds to a few minutes depending on the number of network > interfaces. It happens when an ovs-agent is restarted with the new > of_interface (so only once during the upgrade) and persists until the > network interfaces are set up. > > Please speak up if you cannot tolerate this during upgrades. > > IIUC, this bug is unfixable and I'd like to move forward, as > maintaining two of_interface implementations is a burden for the > neutron team. > > -- > IWAMOTO Toshihiro > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianyrchoi at gmail.com Sat Sep 22 14:32:06 2018 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Sat, 22 Sep 2018 23:32:06 +0900 Subject: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core In-Reply-To: References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> <4f413d36-463e-477a-9886-79bf55df677c@suse.com> <07fcbf71a9406e8d7b918b238377d503@arcor.de> Message-ID: <0aa3ebd2-82d4-4a60-7162-c974c2d6449c@gmail.com> Thanks a lot, all, for the nomination & agreement! I would like to do my best after I become doc-core, as I currently do, although I still need the help of so many kind, energetic, and enthusiastic OpenStack contributors and core members on OpenStack documentation and so many projects. 
With many thanks, /Ian Melvin Hillsman wrote on 9/21/2018 5:31 AM: > ++ > > On Thu, Sep 20, 2018 at 3:11 PM Frank Kloeker > wrote: > > Am 2018-09-19 20:54, schrieb Andreas Jaeger: > > On 2018-09-19 20:50, Petr Kovar wrote: > >> Hi all, > >> > >> Based on our PTG discussion, I'd like to nominate Ian Y. Choi for > >> membership in the openstack-doc-core team. I think Ian doesn't > need an > >> introduction, he's been around for a while, recently being deeply > >> involved > >> in infra work to get us robust support for project team docs > >> translation and > >> PDF builds. > >> > >> Having Ian on the core team will also strengthen our > integration with > >> the i18n community. > >> > >> Please let the ML know should you have any objections. > > > > The opposite ;), heartly agree with adding him, > > > > Andreas > > ++ > > Frank > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ianyrchoi at gmail.com Sat Sep 22 15:22:08 2018 From: ianyrchoi at gmail.com (Ian Y. 
Choi) Date: Sun, 23 Sep 2018 00:22:08 +0900 Subject: [openstack-dev] OpenStack Project Navigator In-Reply-To: References: <5BA4F940.7050608@tipit.net> <8f883bec-2663-0072-65e3-3cd65bd6660f@gmail.com> <5BA57D0D.5080004@openstack.org> Message-ID: <44dcd52a-7f37-aa9a-da98-adfcd0e1ef5d@gmail.com> It would be very nice if the slides of Jimmy's presentation on Wednesday during the PTG, plus other slides, were shared via this and other (e.g., openstack, community, ...) mailing lists. Based on what I discussed with Jimmy on Sep 11, the day before his presentation, and what I learned from looking into it, I can say that: - The repo is https://github.com/OpenStackweb/openstack-org and the target directory would be https://github.com/OpenStackweb/openstack-org/tree/master/software but, from briefly skimming the repository and sources, it seems that the data is managed via an external database, not a yaml file. - The Foundation shared that it would do its best to aggregate scattered definitions for consistency, and find a nice place for all official projects like Docs, i18n, Congress, etc. on the Software/Project Navigator. [1] With many thanks, /Ian [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134883.html Michael Johnson wrote on 9/22/2018 9:41 AM: > Jimmy, > > Yes, the tags are correct in > https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml > but are not correct in Project Navigator. > > I am asking where is the git repository with the Project Navigator > code that creates the "Project Details" section? > I am looking for the Project Navigator source code. > > Michael > On Fri, Sep 21, 2018 at 4:22 PM Jimmy McArthur wrote: >> The TC tags are indeed in a different repo: https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml >> >> Let me know if this makes sense. >> >> Jimmy >> >> Michael Johnson September 21, 2018 at 4:32 PM >> Matt, >> >> I'm a bit confused by your response. 
I'm not looking for a definition >> of the tags, that is very clear. >> >> I'm looking for the source code backing the page that is rendering >> which tags a project has. >> This code appears to be broken and not rendering the tags correctly >> and I wanted to see if I could fix it. >> >> Michael >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Matt Riedemann September 21, 2018 at 3:04 PM >> >> >> Those are down in the project details section of the page, look to the right and there is a 'tag details' column. The tags are descriptive and link to the details on each tag. >> >> Michael Johnson September 21, 2018 at 1:11 PM >> Thank you Jimmy for making this available for updates. >> >> I was unable to find the code backing the project tags section of the >> Project Navigator pages. >> Our page is missing some upgrade tags and is showing duplicate "Stable >> branch policy" tags. >> >> https://www.openstack.org/software/releases/rocky/components/octavia >> >> Is there a different repository for the tags code? >> >> Thanks, >> Michael >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy Mcarthur September 21, 2018 at 8:59 AM >> Following up on my (very brief) talk from the PTG, you can now propose changes to the Project Navigator by going to https://git.openstack.org/cgit/openstack/openstack-map repository >> >> Once your patch is merged, the page should reflect the changes straight away. 
>> >> Cheers, >> Jimmy >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Sat Sep 22 16:15:34 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 22 Sep 2018 11:15:34 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-29 Update In-Reply-To: References: <2ce541ea-7f2c-6d12-2831-3a658e69e52e@gmail.com> Message-ID: <39a23c25-eed0-fa1e-0afd-14465f35ee14@gmail.com> On 9/21/2018 4:19 PM, Ben Nemec wrote: >> * The only two projects that I'm aware of with patches up at this >> point are monasca [2] and designate [3]. The monasca one is tricky >> because as I've found going through release notes for some projects, >> they don't really have any major upgrade impacts so writing checks is >> not obvious. I don't have a great solution here. What monasca has done >> is add the framework with a noop check. If others are in the same >> situation, I'd like to hear your thoughts on what you think makes >> sense here. The alternative is these projects opt out of the goal for >> Stein and just add the check code later when it makes sense (but >> people might forget or not care to do that later if it's not a goal). 
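The "framework with a noop check" shape that monasca adopted can be illustrated with a self-contained sketch. Note this only mimics the structure of an oslo.upgradecheck-style status command — it is not the real library's API, whose names and details differ:

```python
import enum


class Code(enum.IntEnum):
    # Exit codes in the usual upgrade-check convention.
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2


class Result:
    def __init__(self, code, details=""):
        self.code = code
        self.details = details


class UpgradeCommands:
    """Runs each registered check and returns the worst result code."""

    def _check_placeholder(self):
        # Initial no-op check: the framework is wired up so deployment
        # tooling can call it, but this release has nothing to verify yet.
        return Result(Code.SUCCESS, "No upgrade checks needed yet")

    _upgrade_checks = (("Placeholder", _check_placeholder),)

    def check(self):
        worst = Code.SUCCESS
        for name, func in self._upgrade_checks:
            result = func(self)
            worst = max(worst, result.code)
            print("%-12s | %-7s | %s" % (name, result.code.name,
                                         result.details))
        return int(worst)
```

Deployment tools can then treat exit code 0 as "safe to proceed" for every project uniformly, which is exactly the consistency argument made in this thread.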
> > My inclination is for the command to exist with a noop check, the main > reason being that if we create it for everyone this cycle then the > deployment tools can implement calls to the status commands all at once. > If we wait until checks are needed then someone has to not only > implement it in the service but also remember to go update all of the > deployment tools. Implementing a noop check should be pretty trivial > with the library so it isn't a huge imposition. Yeah, I agree, and I've left comments on the patch to give some ideas on how to write the noop check with a description that explains it's an initial check but doesn't really do anything. The alternative would be to dump the table header for the results but then not have any rows, which could be more confusing. -- Thanks, Matt From tony at bakeyournoodle.com Sat Sep 22 16:54:20 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 22 Sep 2018 11:54:20 -0500 Subject: [openstack-dev] [all] Nominations for the "T" Release name Message-ID: <20180922165419.GD5096@thor.bakeyournoodle.com> Hey everybody, Once again, it is time for us to pick a name for our "T" release. Since the associated Summit will be in Denver, the Geographic Location has been chosen as "Colorado" (State). Nominations are now open. Please add suitable names to https://wiki.openstack.org/wiki/Release_Naming/T_Proposals between now and 2018-10-15 23:59 UTC. In case you don't remember the rules: * Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of "Austin". After "Z", the next name should start with "A" again. * The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable. 
* The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process. * The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so "Foo City" or "Foo Peak" would both be eligible as "Foo". Names which do not meet these criteria but otherwise sound really cool should be added to a separate section of the wiki page and the TC may make an exception for one or more of them to be considered in the Condorcet poll. The naming official is responsible for presenting the list of exceptional names for consideration to the TC before the poll opens. Let the naming begin. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jimmy at openstack.org Sat Sep 22 17:26:45 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Sat, 22 Sep 2018 12:26:45 -0500 Subject: [openstack-dev] OpenStack Project Navigator In-Reply-To: References: <5BA4F940.7050608@tipit.net> <8f883bec-2663-0072-65e3-3cd65bd6660f@gmail.com> <5BA57D0D.5080004@openstack.org> Message-ID: <5BA67B55.1060609@openstack.org> We ingest the data from yaml files, at least once per day. We had a bug yesterday that was still reading from the old @ttx repo instead of the new openstack-map repo on git.openstack.org. That might explain some of the issues you were seeing with the project details. I just pushed a fix this morning, so let me know if that cleared up the problem. 
The code for the software pages on openstack.org can be found here: https://github.com/OpenStackweb/openstack-org/tree/master/software/code The yaml that feeds Project Navigator (Project name, project details, etc...) https://git.openstack.org/cgit/openstack/openstack-map The yaml that feeds the project tags can be found here: https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml The slides from the PTG presentation can be found here: https://drive.google.com/file/d/10D-h9uJ456fphluc478KZUJZew0XiiO-/view?usp=sharing Let me know if I can provide additional details. Thanks, Jimmy > Michael Johnson > September 21, 2018 at 7:41 PM > Jimmy, > > Yes, the tags are correct in > https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml > but are not correct in Project Navigator. > > I am asking where is the git repository with the Project Navigator > code that creates the "Project Details" section? > I am looking for the Project Navigator source code. > > Michael > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > September 21, 2018 at 6:21 PM > The TC tags are indeed in a different repo: > https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml > > Let me know if this makes sense. > > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Michael Johnson > September 21, 2018 at 4:32 PM > Matt, > > I'm a bit confused by your response. I'm not looking for a definition > of the tags, that is very clear. 
> > I'm looking for the source code backing the page that is rendering > which tags a project has. > This code appears to be broken and not rendering the tags correctly > and I wanted to see if I could fix it. > > Michael > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Matt Riedemann > September 21, 2018 at 3:04 PM > > > Those are down in the project details section of the page, look to the > right and there is a 'tag details' column. The tags are descriptive > and link to the details on each tag. > > Michael Johnson > September 21, 2018 at 1:11 PM > Thank you Jimmy for making this available for updates. > > I was unable to find the code backing the project tags section of the > Project Navigator pages. > Our page is missing some upgrade tags and is showing duplicate "Stable > branch policy" tags. > > https://www.openstack.org/software/releases/rocky/components/octavia > > Is there a different repository for the tags code? > > Thanks, > Michael > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianyrchoi at gmail.com Sat Sep 22 21:10:09 2018 From: ianyrchoi at gmail.com (Ian Y. 
Choi) Date: Sun, 23 Sep 2018 06:10:09 +0900 Subject: [openstack-dev] OpenStack Project Navigator In-Reply-To: <5BA67B55.1060609@openstack.org> References: <5BA4F940.7050608@tipit.net> <8f883bec-2663-0072-65e3-3cd65bd6660f@gmail.com> <5BA57D0D.5080004@openstack.org> <5BA67B55.1060609@openstack.org> Message-ID: Thanks a lot for sharing the presentation slides and the more detailed explanation, which are very helpful for me to understand the context. I thought that the data were in some external database, not in the repo, and now I clearly see how it syncs. [1] With many thanks, /Ian [1] https://github.com/OpenStackweb/openstack-org/commit/d8d691848a1677c0a96ddd3029bec645dec8e1f7#diff-fbc5b77edab58d4fe24dd4f7644cc62b Jimmy McArthur wrote on 9/23/2018 2:26 AM: > We ingest the data from yaml files, at least once per day. We had a > bug yesterday that was still reading from the old @ttx repo i/o of the > new openstack-map repo on git.openstack.org. That might explain some > of the issues you were seeing with the project details. I just pushed > a fix this morning, so let me know if that cleared up the problem. > > The code for the software pages on openstack.org can be found here: > https://github.com/OpenStackweb/openstack-org/tree/master/software/code > > The yaml that feeds Project Navigator (Project name, project details, > etc...) > https://git.openstack.org/cgit/openstack/openstack-map > > The yaml that feeds the project tags can be found here: > https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml > > The slides from the PTG presentation can be found here: > https://drive.google.com/file/d/10D-h9uJ456fphluc478KZUJZew0XiiO-/view?usp=sharing > > Let me know if I can provide additional details. > > Thanks, > Jimmy > >> Michael Johnson >> September 21, 2018 at 7:41 PM >> Jimmy, >> >> Yes, the tags are correct in >> https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml >> but are not correct in Project Navigator. 
>> >> I am asking where is the git repository with the Project Navigator >> code that creates the "Project Details" section? >> I am looking for the Project Navigator source code. >> >> Michael >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> September 21, 2018 at 6:21 PM >> The TC tags are indeed in a different repo: >> https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml >> >> Let me know if this makes sense. >> >> Jimmy >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Michael Johnson >> September 21, 2018 at 4:32 PM >> Matt, >> >> I'm a bit confused by your response. I'm not looking for a definition >> of the tags, that is very clear. >> >> I'm looking for the source code backing the page that is rendering >> which tags a project has. >> This code appears to be broken and not rendering the tags correctly >> and I wanted to see if I could fix it. >> >> Michael >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Matt Riedemann >> September 21, 2018 at 3:04 PM >> >> >> Those are down in the project details section of the page, look to >> the right and there is a 'tag details' column. The tags are >> descriptive and link to the details on each tag. 
>> >> Michael Johnson >> September 21, 2018 at 1:11 PM >> Thank you Jimmy for making this available for updates. >> >> I was unable to find the code backing the project tags section of the >> Project Navigator pages. >> Our page is missing some upgrade tags and is showing duplicate "Stable >> branch policy" tags. >> >> https://www.openstack.org/software/releases/rocky/components/octavia >> >> Is there a different repository for the tags code? >> >> Thanks, >> Michael >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Sun Sep 23 03:49:25 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sat, 22 Sep 2018 22:49:25 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-29 Update In-Reply-To: References: <2ce541ea-7f2c-6d12-2831-3a658e69e52e@gmail.com> Message-ID: <20180923034924.GA12490@sm-workstation> On Fri, Sep 21, 2018 at 04:19:35PM -0500, Ben Nemec wrote: > > > On 09/21/2018 03:53 PM, Matt Riedemann wrote: > > Updates for this week: > > > > * As bnemec noted in the last update [1], he's making some progress with > > the oslo.upgradecheck library. He's retrofitting the nova-status upgrade > > check code to use the library and has a patch up for designate to use > > it. > > > > * The only two projects that I'm aware of with patches up at this point > > are monasca [2] and designate [3]. 
The monasca one is tricky because as > > I've found going through release notes for some projects, they don't > > really have any major upgrade impacts so writing checks is not obvious. > > I don't have a great solution here. What monasca has done is add the > > framework with a noop check. If others are in the same situation, I'd > > like to hear your thoughts on what you think makes sense here. The > > alternative is these projects opt out of the goal for Stein and just add > > the check code later when it makes sense (but people might forget or not > > care to do that later if it's not a goal). > > My inclination is for the command to exist with a noop check, the main > reason being that if we create it for everyone this cycle then the > deployment tools can implement calls to the status commands all at once. If > we wait until checks are needed then someone has to not only implement it in > the service but also remember to go update all of the deployment tools. > Implementing a noop check should be pretty trivial with the library so it > isn't a huge imposition. > This was brought up at one point, and I think the preference for those involved at the time was to still have the upgrade check available, even if it is just a noop. The reason being as you state that it makes things consistent for deployment tooling to be able to always run the check, regardless which project is being done. Sean From ildiko.vancsa at gmail.com Sun Sep 23 12:29:03 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sun, 23 Sep 2018 14:29:03 +0200 Subject: [openstack-dev] [os-upstream-institute] Find a slot for a meeting to discuss - ACTION NEEDED Message-ID: <313CAE1B-CCBB-426F-976B-0320B2273BA1@gmail.com> Hi Training Team, With the new workshop style training format that is utilizing the Contributor Guide we have less work with the training material side and we have less items to discuss in the form of regular meetings. 
However, we have a few items to go through before the upcoming training in Berlin to make sure we are fully prepared. We also have Florian from City Network working on the online version of the training that he would like to discuss with the team. As the current meeting slot is very inconvenient in Europe and Asia as well I put together a Doodle poll to try to find a better slot if we can. As we have people all around the globe all the slots are inconvenient to a subset of the team, but if we can agree on one for one meeting it would be great. Please vote on the poll as soon as possible: https://doodle.com/poll/yrp9anbb7weaun4h When we have the full list of mentors for the Berlin training I will send out a separate poll for one or two prep calls. If you’re available for the training and not signed up on the wiki yet please sign up: https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions#Berlin_Crew Please let me know if you have any questions. Thanks and Best Regards, Ildikó From doug at doughellmann.com Sun Sep 23 21:49:47 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 23 Sep 2018 17:49:47 -0400 Subject: [openstack-dev] [Release-job-failures] Release of openstack/group-based-policy failed In-Reply-To: References: Message-ID: zuul at openstack.org writes: > Build failed. > > - release-openstack-python http://logs.openstack.org/7a/7abbf9af380eb236908f4d41d56e4cb1a0c2f135/release/release-openstack-python/ac1d5ab/ : FAILURE in 8m 44s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures The release-openstack-python job keeps failing with an SSL module error. It is currently running tox under python 2, and SSL support there has been steadily getting worse over time. 
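Moving the job's tox environment onto Python 3, the first of the fixes suggested below, is typically a one-line change; as a hedged sketch (the environment name and surrounding file are illustrative, not the actual group-based-policy config):

```ini
# Hypothetical tox.ini fragment: pin the venv environment, which
# release jobs commonly invoke via `tox -e venv ...`, to Python 3.
[testenv:venv]
basepython = python3
commands = {posargs}
```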
I suggest trying to fix the release job by either updating the tox environment in the repo to use python 3, or switching to the new publish-to-pypi-python3 template which runs a job that uses python 3. See [1] for an example. Doug [1] https://review.openstack.org/#/c/598323 From e0ne at e0ne.info Mon Sep 24 08:18:42 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 24 Sep 2018 11:18:42 +0300 Subject: [openstack-dev] [horizon] Horizon gates are broken Message-ID: Hi team, Unfortunately, horizon gates are broken now. We can't merge any patch due to the -1 from CI. I don't want to disable tests now, that's why I proposed a fix [1]. Some XStatic-* packages were released last week. At least the new XStatic-jQuery [2] breaks horizon [3]. I'm working on a new job for the requirements repo [4] to prevent such issues in the future. Please do not 'recheck' until [1] is merged. [1] https://review.openstack.org/#/c/604611/ [2] https://pypi.org/project/XStatic-jQuery/#history [3] https://bugs.launchpad.net/horizon/+bug/1794028 [4] https://review.openstack.org/#/c/604613/ Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Mon Sep 24 09:10:14 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 24 Sep 2018 11:10:14 +0200 Subject: [openstack-dev] [goals][python3][nova] starting zuul migration for nova repos In-Reply-To: References: <1536608885-sup-3596@lrrr.local> Message-ID: On 11/09/2018 19.13, Stephen Finucane wrote: > On Mon, 2018-09-10 at 13:48 -0600, Doug Hellmann wrote: >> Melanie gave me the go-ahead to propose the patches, so here's the list >> of patches for the zuul migration, doc job update, and python 3.6 unit >> tests for the nova repositories.
> > Here's a query for anyone that wants to jump in here. > > https://review.openstack.org/#/q/topic:python3-first+status:open+(openstack/nova+OR+project:openstack/nova-specs+OR+openstack/os-traits+OR+openstack/os-vif+OR+openstack/osc-placement+OR+openstack/python-novaclient) Most of these are merged - with exception of stable changes and changes to osc-placement. Any nova stable reviewers around to finish this, please? Thanks, Andreas > > Stephen > > PS: Thanks, Andreas, for the follow-up cleanup patches. Much > appreciated :) > >> +----------------------------------------------+--------------------------------+---------------+ >>> Subject | Repo | Branch | >> >> +----------------------------------------------+--------------------------------+---------------+ >>> remove job settings for nova repositories | openstack-infra/project-config | master | >>> import zuul job settings from project-config | openstack/nova | master | >>> switch documentation job to new PTI | openstack/nova | master | >>> add python 3.6 unit test job | openstack/nova | master | >>> import zuul job settings from project-config | openstack/nova | stable/ocata | >>> import zuul job settings from project-config | openstack/nova | stable/pike | >>> import zuul job settings from project-config | openstack/nova | stable/queens | >>> import zuul job settings from project-config | openstack/nova | stable/rocky | >>> import zuul job settings from project-config | openstack/nova-specs | master | >>> import zuul job settings from project-config | openstack/os-traits | master | >>> switch documentation job to new PTI | openstack/os-traits | master | >>> add python 3.6 unit test job | openstack/os-traits | master | >>> import zuul job settings from project-config | openstack/os-traits | stable/pike | >>> import zuul job settings from project-config | openstack/os-traits | stable/queens | >>> import zuul job settings from project-config | openstack/os-traits | stable/rocky | >>> import zuul job settings 
from project-config | openstack/os-vif | master | >>> switch documentation job to new PTI | openstack/os-vif | master | >>> add python 3.6 unit test job | openstack/os-vif | master | >>> import zuul job settings from project-config | openstack/os-vif | stable/ocata | >>> import zuul job settings from project-config | openstack/os-vif | stable/pike | >>> import zuul job settings from project-config | openstack/os-vif | stable/queens | >>> import zuul job settings from project-config | openstack/os-vif | stable/rocky | >>> import zuul job settings from project-config | openstack/osc-placement | master | >>> switch documentation job to new PTI | openstack/osc-placement | master | >>> add python 3.6 unit test job | openstack/osc-placement | master | >>> import zuul job settings from project-config | openstack/osc-placement | stable/queens | >>> import zuul job settings from project-config | openstack/osc-placement | stable/rocky | >>> import zuul job settings from project-config | openstack/python-novaclient | master | >>> switch documentation job to new PTI | openstack/python-novaclient | master | >>> add python 3.6 unit test job | openstack/python-novaclient | master | >>> add lib-forward-testing-python3 test job | openstack/python-novaclient | master | >>> import zuul job settings from project-config | openstack/python-novaclient | stable/ocata | >>> import zuul job settings from project-config | openstack/python-novaclient | stable/pike | >>> import zuul job settings from project-config | openstack/python-novaclient | stable/queens | >>> import zuul job settings from project-config | openstack/python-novaclient | stable/rocky | >> >> +----------------------------------------------+--------------------------------+---------------+ >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From balazs.gibizer at ericsson.com Mon Sep 24 09:31:34 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Mon, 24 Sep 2018 09:31:34 +0000 Subject: [openstack-dev] [goals][python3][nova] starting zuul migration for nova repos In-Reply-To: References: <1536608885-sup-3596@lrrr.local> Message-ID: <1537781490.28718.5@smtp.office365.com> On Mon, Sep 24, 2018 at 11:10 AM, Andreas Jaeger wrote: > On 11/09/2018 19.13, Stephen Finucane wrote: >> On Mon, 2018-09-10 at 13:48 -0600, Doug Hellmann wrote: >>> Melanie gave me the go-ahead to propose the patches, so here's the >>> list >>> of patches for the zuul migration, doc job update, and python 3.6 >>> unit >>> tests for the nova repositories. >> >> I've reviewed/+2d all of these on master and think Sylvain will be >> following up with the +Ws. I need someone else to handle the >> 'stable/XXX' patches though. >> >> Here's a query for anyone that wants to jump in here. >> >> https://review.openstack.org/#/q/topic:python3-first+status:open+(openstack/nova+OR+project:openstack/nova-specs+OR+openstack/os-traits+OR+openstack/os-vif+OR+openstack/osc-placement+OR+openstack/python-novaclient) > > Most of these are merged - with exception of stable changes and > changes to osc-placement. Any nova stable reviewers around to finish > this, please? 
I've +A-d the osc-placement patches but I cannot do the same for the stale patches. Cheers, gibi > > Thanks, > Andreas > > >> >> Stephen >> >> PS: Thanks, Andreas, for the follow-up cleanup patches. Much >> appreciated :) >> >>> +----------------------------------------------+--------------------------------+---------------+ >>>> Subject | Repo >>>> | Branch | >>> >>> +----------------------------------------------+--------------------------------+---------------+ >>>> remove job settings for nova repositories | >>>> openstack-infra/project-config | master | >>>> import zuul job settings from project-config | openstack/nova >>>> | master | >>>> switch documentation job to new PTI | openstack/nova >>>> | master | >>>> add python 3.6 unit test job | openstack/nova >>>> | master | >>>> import zuul job settings from project-config | openstack/nova >>>> | stable/ocata | >>>> import zuul job settings from project-config | openstack/nova >>>> | stable/pike | >>>> import zuul job settings from project-config | openstack/nova >>>> | stable/queens | >>>> import zuul job settings from project-config | openstack/nova >>>> | stable/rocky | >>>> import zuul job settings from project-config | >>>> openstack/nova-specs | master | >>>> import zuul job settings from project-config | openstack/os-traits >>>> | master | >>>> switch documentation job to new PTI | openstack/os-traits >>>> | master | >>>> add python 3.6 unit test job | openstack/os-traits >>>> | master | >>>> import zuul job settings from project-config | openstack/os-traits >>>> | stable/pike | >>>> import zuul job settings from project-config | openstack/os-traits >>>> | stable/queens | >>>> import zuul job settings from project-config | openstack/os-traits >>>> | stable/rocky | >>>> import zuul job settings from project-config | openstack/os-vif >>>> | master | >>>> switch documentation job to new PTI | openstack/os-vif >>>> | master | >>>> add python 3.6 unit test job | openstack/os-vif >>>> | master | >>>> import 
zuul job settings from project-config | openstack/os-vif >>>> | stable/ocata | >>>> import zuul job settings from project-config | openstack/os-vif >>>> | stable/pike | >>>> import zuul job settings from project-config | openstack/os-vif >>>> | stable/queens | >>>> import zuul job settings from project-config | openstack/os-vif >>>> | stable/rocky | >>>> import zuul job settings from project-config | >>>> openstack/osc-placement | master | >>>> switch documentation job to new PTI | >>>> openstack/osc-placement | master | >>>> add python 3.6 unit test job | >>>> openstack/osc-placement | master | >>>> import zuul job settings from project-config | >>>> openstack/osc-placement | stable/queens | >>>> import zuul job settings from project-config | >>>> openstack/osc-placement | stable/rocky | >>>> import zuul job settings from project-config | >>>> openstack/python-novaclient | master | >>>> switch documentation job to new PTI | >>>> openstack/python-novaclient | master | >>>> add python 3.6 unit test job | >>>> openstack/python-novaclient | master | >>>> add lib-forward-testing-python3 test job | >>>> openstack/python-novaclient | master | >>>> import zuul job settings from project-config | >>>> openstack/python-novaclient | stable/ocata | >>>> import zuul job settings from project-config | >>>> openstack/python-novaclient | stable/pike | >>>> import zuul job settings from project-config | >>>> openstack/python-novaclient | stable/queens | >>>> import zuul job settings from project-config | >>>> openstack/python-novaclient | stable/rocky | >>> >>> +----------------------------------------------+--------------------------------+---------------+ >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> 
__________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 > A126 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sfinucan at redhat.com Mon Sep 24 10:08:09 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 24 Sep 2018 11:08:09 +0100 Subject: [openstack-dev] [ptg] Post-lunch presentations schedule In-Reply-To: <58faffcf366246529c39ea680776df66@R01UKEXCASM126.r01.fujitsu.local> References: <58faffcf366246529c39ea680776df66@R01UKEXCASM126.r01.fujitsu.local> Message-ID: On Tue, 2018-09-18 at 12:42 +0000, Bedyk, Witold wrote: > Stephen, > > could you please share your presentation slides? > > Thanks > Witek Yup. They're available here: https://docs.google.com/presentation/d/1gaRFmUEJEmy-8lCFsxauQLZNB2ch4hPQ0L4yO11vgL0/edit Stephen > > > -----Original Message----- > > From: Thierry Carrez > > Sent: Freitag, 24. August 2018 11:21 > > To: OpenStack Development Mailing List > dev at lists.openstack.org> > > Subject: [openstack-dev] [ptg] Post-lunch presentations schedule > > > > > Friday: Lightning talks > > Fast-paced 5-min segments to talk about anything... Summaries of > > team > > plans for Stein encouraged. A presentation of Sphinx in OpenStack > > by > > stephenfin will open the show. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From derekh at redhat.com Mon Sep 24 11:17:02 2018 From: derekh at redhat.com (Derek Higgins) Date: Mon, 24 Sep 2018 12:17:02 +0100 Subject: [openstack-dev] [ironic] status of the zuulv3 job migration In-Reply-To: References: Message-ID: On Fri, 21 Sep 2018 at 11:16, Derek Higgins wrote: > Just a quick summary of the status, and looking for some input about the > experimental jobs. > > 15 jobs are now done, with another 2 ready for review. This leaves 6 > jobs: > 1 x multinode job > I've yet to finish porting this one. > 2 x grenade jobs > Last time I looked, grenade jobs couldn't yet be ported to zuulv3 native, > but I'll investigate further. > > 3 x experimental jobs > (ironic-dsvm-functional, ironic-tempest-dsvm-parallel, ironic-tempest-dsvm-pxe_ipa-full) > These don't currently pass and it doesn't look like anybody is using > them, so I'd like to know if there is anybody out there interested in them; > if not I'll go ahead and remove them. > Job removal proposed here: https://review.openstack.org/#/c/591675/ > > thanks, > Derek. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Mon Sep 24 12:00:27 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 24 Sep 2018 14:00:27 +0200 Subject: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs Message-ID: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> This is in regard to https://launchpad.net/bugs/1641625 and the proposed patch https://review.openstack.org/588211 for it. Thanks Vishakha for getting the ball rolling.
tl;dr: Keystone as an IdP should support sending non-strings/lists-of-strings as user attribute values, specifically lists of keystone groups; here's how that might happen. Problem statement: When keystone is set up as a service provider with an external non-keystone identity provider, it is common to configure the mapping rules to accept a list of group names from the IdP and map them to some property of a local keystone user, usually also a keystone group name. When keystone acts as the IdP, it's not currently possible to send a group name as a user property in the assertion. There are a few problems: 1. We haven't added any openstack_groups key in the creation of the SAML assertion (http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164). 2. If we did, this would not be enough. Unlike other IdPs, in keystone there can be multiple groups with the same name, namespaced by domain. So it's not enough for the SAML AttributeStatement to contain a semi-colon-separated list of group names, since a user could theoretically be a member of two or more groups with the same name. * Why can't we just send group IDs, which are unique? Because two different keystones are not going to have independent groups with the same UUID, so we cannot possibly map an ID of a group from keystone A to the ID of a different group in keystone B. We could map the ID of the group in A to the name of a group in B, but then operators need to create groups with UUIDs as names, which is a little awkward for both the operator and the user, who is now a member of groups with nondescriptive names. 3. If we then were able to encode a complex type like a group dict in a SAML assertion, we'd have to deal with it on the service provider side by being able to parse such an environment variable from the Apache headers. 4. The current mapping rules engine uses basic python string formatting to translate remote key-value pairs to local rules.
We would need to change the mapping API to work with values more complex than strings and lists of strings. Possible solution: Vishakha's patch (https://review.openstack.org/588211) starts to solve (1) but it doesn't go far enough to solve (2-4). What we talked about at the PTG was: 2. Encode the group+domain as a string, for example by using the dict string repr or a string representation of some custom XML and maybe base64 encoding it. * It's not totally clear whether the AttributeValue class of the pysaml2 library supports any data types outside of the xmlns:xs namespace or whether nested XML is an option, so encoding the whole thing as an xs:string seems like the simplest solution. 3. The SP will have to be aware that openstack_groups is a special key that needs the encoding reversed. * I wrote down "MultiDict" in my notes but I don't recall exactly what format the environment variable would take that would make a MultiDict make sense here, in any case I think encoding the whole thing as a string eliminates the need for this. 4. We didn't talk about the mapping API, but here's what I think. If we were just talking about group names, the mapping API today would work like this (slight oversimplification for brevity): Given a list of openstack_groups like ["A", "B", "C"]: [ { "local": [ { "group": { "name": "{0}", "domain": { "name": "federated_domain" } } } ], "remote": [ { "type": "openstack_groups" } ] } ] (paste in case the spacing makes this unreadable: http://paste.openstack.org/show/730623/ ) But now, we no longer have a list of strings but something more like [{"name": "A", "domain_name": "Default"}, {"name": "B", "domain_name": "Default"}, {"name": "A", "domain_name": "domainB"}]. Since {0} isn't a string, this example doesn't really work. Instead, let's assume that in step (3) we converted the decoded AttributeValue text to an object.
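As an aside, the encode/decode round trip sketched in points (2) and (3) can be illustrated with a minimal stdlib-only example (this is a sketch of the idea, not keystone's implementation; the helper names are invented):

```python
import base64
import json
from types import SimpleNamespace

def encode_group(group):
    # IdP side: serialize a group dict into a token that fits in a
    # plain xs:string SAML AttributeValue (base64 over JSON).
    return base64.urlsafe_b64encode(
        json.dumps(group).encode("utf-8")).decode("ascii")

def decode_group(token):
    # SP side: reverse the encoding and return an object, so that a
    # mapping template like "{0.name}" / "{0.domain_name}" can apply.
    data = json.loads(base64.urlsafe_b64decode(token.encode("ascii")))
    return SimpleNamespace(**data)

# Two groups named "A" in different domains remain distinguishable:
groups = [{"name": "A", "domain_name": "Default"},
          {"name": "A", "domain_name": "domainB"}]
tokens = [encode_group(g) for g in groups]  # e.g. joined with ";" in the assertion
decoded = [decode_group(t) for t in tokens]
print("{0.name}@{0.domain_name}".format(decoded[1]))  # A@domainB
```

Because each token is an opaque ASCII string, it survives both the AttributeStatement and the Apache environment-variable hop unchanged.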
Then the mapping could look more like this: [ { "local": [ { "group": { "name": "{0.name}", "domain": { "name": "{0.domain_name}" } } } ], "remote": [ { "type": "openstack_groups" } ] } ] (paste: http://paste.openstack.org/show/730622/ ) Alternatively, we could forget about the namespacing problem and simply say we only pass group names in the assertion, and if you have ambiguous group names you're on your own. We could also try to support both, e.g. have an openstack_groups mean a list of group names for simpler use cases, and openstack_groups_unique mean the list of encoded group+domain strings for advanced use cases. Finally, whatever we decide for groups we should also apply to openstack_roles which currently only supports global roles and not domain-specific roles. (It's also worth noting, for clarity, that the samlize function does handle namespaced projects, but this is because it's retrieving the project from the token and therefore there is only ever one project and one project domain so there is no ambiguity.) Thoughts? - Colleen (cmurphy) From kchamart at redhat.com Mon Sep 24 13:22:50 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 24 Sep 2018 15:22:50 +0200 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for 'T' release Message-ID: <20180924132250.GW28120@paraplu> Hey folks, Before we bump the agreed upon[1] minimum versions for libvirt and QEMU for 'Stein', we need to do the tedious work of picking the NEXT_MIN_* versions for the 'T' (which is still in the naming phase) release, which will come out in the autumn (Sep-Nov) of 2019. Proposal -------- Looking at the DistroSupportMatrix[2], it seems like we can pick the libvirt and QEMU versions supported by the next LTS release of Ubuntu -- 18.04; "Bionic", which are: libvirt: 4.0.0 QEMU: 2.11 Debian, Fedora, Ubuntu (Bionic), openSUSE currently already ship the above versions. 
And it seems reasonable to assume that the enterprise distribtions will also ship the said versions pretty soon; but let's double-confirm below. Considerations and open questions --------------------------------- (a) KVM for IBM z Systems: John Garbutt pointed out[3] on IRC that: "IBM announced that KVM for IBM z will be withdrawn, effective March 31, 2018 [...] development will not only continue unaffected, but the options for users grow, especially with the recent addition of SuSE to the existing support in Ubuntu." The message seems to be: "use a regular distribution". So this is covered, if we a version based on other distributions. (b) Oracle Linux: Can you please confirm if you'll be able to release libvirt and QEMU to 4.0.0 and 2.11, respectively? (c) SLES: Same question as above. Assuming Oracle Linux and SLES confirm, please let us know if there are any objections if we pick NEXT_MIN_* versions for the OpenStack 'T' release to be libvirt: 4.0.0 and QEMU: 2.11. * * * A refresher on libvirt and QEMU release schedules ------------------------------------------------- - There will be at least 12 libvirt releases (_excluding_ maintenance releases) by Autumn 2019. A new libvirt release comes out every month[4]. - And there will be about 4 releases of QEMU. A new QEMU release comes out once every four months. [1] http://git.openstack.org/cgit/openstack/nova/commit/?h=master&id=28d337b -- Pick next minimum libvirt / QEMU versions for "Stein" [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix [3] http://kvmonz.blogspot.com/2017/03/kvm-for-ibm-z-withdrawal.html [4] https://libvirt.org/downloads.html#schedule -- /kashyap From bodenvmw at gmail.com Mon Sep 24 13:57:14 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Mon, 24 Sep 2018 07:57:14 -0600 Subject: [openstack-dev] [neutron] Bug deputy report week of Sept 17 Message-ID: <359e1d29-063d-41bc-18f9-376601bf3c44@gmail.com> Below is a summary of last weeks bug activity. 
I've tried to organize the summary to highlight those bugs that still need attention. Thanks Needs additional technical triage: - [dvr][ha][dataplane down] router_gateway port binding host goes wrong after the 'master' host down/up https://bugs.launchpad.net/neutron/+bug/1793529 - q-dhcp crashes with guru meditation on ironic's grenade https://bugs.launchpad.net/neutron/+bug/1792925 - [RFE] Enable other projects to extend l2pop fdb information https://bugs.launchpad.net/neutron/+bug/1793653 Under discussion: - Egress UDP traffic is dropped https://bugs.launchpad.net/neutron/+bug/1793244 - ha_vrrp_health_check_interval causes constantly VRRP transitions https://bugs.launchpad.net/neutron/+bug/1793102 - subnet pool can not delete prefixes https://bugs.launchpad.net/neutron/+bug/1792901 - external_gateway_info enable_snat attribute should be owner-modifiable https://bugs.launchpad.net/neutron/+bug/1793207 Triaged, but no assignee: - DB deadlock when delete subnet https://bugs.launchpad.net/neutron/+bug/1793516 - public-subnet not explained https://bugs.launchpad.net/neutron/+bug/1793103 - adding 0.0.0.0/0 address pair to a port bypasses all other vm security groups https://bugs.launchpad.net/neutron/+bug/1793029 - The user can delete a security group which is used as remote-group-id https://bugs.launchpad.net/neutron/+bug/1792890 In progress: - [dvr_no_external][ha][dataplane down]centralized floating IP nat rules not install in every HA node https://bugs.launchpad.net/neutron/+bug/1793527 - Subnet update with the subnet's current segment_id fail with: NoUpdateSubnetWhenMultipleSegmentsOnNetwork https://bugs.launchpad.net/neutron/+bug/1793391 - Changing of_interface between native and ovs-ofctl causes packet drops https://bugs.launchpad.net/neutron/+bug/1793354 - Router: add port doesn't take IP from allocation pool https://bugs.launchpad.net/neutron/+bug/1793094 From lbragstad at gmail.com Mon Sep 24 14:16:59 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: 
Mon, 24 Sep 2018 09:16:59 -0500 Subject: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs In-Reply-To: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> References: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> Message-ID: On Mon, Sep 24, 2018 at 7:00 AM Colleen Murphy wrote: > This is in regard to https://launchpad.net/bugs/1641625 and the proposed > patch https://review.openstack.org/588211 for it. Thanks Vishakha for > getting the ball rolling. > > tl;dr: Keystone as an IdP should support sending > non-strings/lists-of-strings as user attribute values, specifically lists > of keystone groups, here's how that might happen. > > Problem statement: > > When keystone is set up as a service provider with an external > non-keystone identity provider, it is common to configure the mapping rules > to accept a list of group names from the IdP and map them to some property > of a local keystone user, usually also a keystone group name. When keystone > acts as the IdP, it's not currently possible to send a group name as a user > property in the assertion. There are a few problems: > > 1. We haven't added any openstack_groups key in the creation of the > SAML assertion ( > http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164 > ). > 2. If we did, this would not be enough. Unlike other IdPs, in keystone > there can be multiple groups with the same name, namespaced by domain. So > it's not enough for the SAML AttributeStatement to contain a > semi-colon-separated list of group names, since a user could theoretically > be a member of two or more groups with the same name. > * Why can't we just send group IDs, which are unique? Because two > different keystones are not going to have independent groups with the same > UUID, so we cannot possibly map an ID of a group from keystone A to the ID > of a different group in keystone B. 
We could map the ID of the group in in > A to the name of a group in B but then operators need to create groups with > UUIDs as names which is a little awkward for both the operator and the user > who now is a member of groups with nondescriptive names. > 3. If we then were able to encode a complex type like a group dict in > a SAML assertion, we'd have to deal with it on the service provider side by > being able to parse such an environment variable from the Apache headers. > 4. The current mapping rules engine uses basic python string > formatting to translate remote key-value pairs to local rules. We would > need to change the mapping API to work with values more complex than > strings and lists of strings. > > Possible solution: > > Vishakha's patch (https://review.openstack.org/588211) starts to solve > (1) but it doesn't go far enough to solve (2-4). What we talked about at > the PTG was: > > 2. Encode the group+domain as a string, for example by using the dict > string repr or a string representation of some custom XML and maybe base64 > encoding it. > * It's not totally clear whether the AttributeValue class of the > pysaml2 library supports any data types outside of the xmlns:xs namespace > or whether nested XML is an option, so encoding the whole thing as an > xs:string seems like the simplest solution. > Encoding this makes sense. We can formally support different SAML data types in the future if a better solution comes along. We would have to make the service provider deal with both types of encoding, but we could eventually consolidate, and users shouldn't know the difference. Right? > 3. The SP will have to be aware that openstack_groups is a special key > that needs the encoding reversed. > * I wrote down "MultiDict" in my notes but I don't recall exactly > what format the environment variable would take that would make a MultiDict > make sense here, in any case I think encoding the whole thing as a string > eliminates the need for this. > 4. 
We didn't talk about the mapping API, but here's what I think. If > we were just talking about group names, the mapping API today would work > like this (slight oversimplification for brevity): > > Given a list of openstack_groups like ["A", "B", "C"], it would work like > this: > > [ > { > "local": > [ > { > "group": > { > "name": "{0}", > "domain": > { > "name": "federated_domain" > } > } > } > ], "remote": > [ > { > "type": "openstack_groups" > } > ] > } > ] > (paste in case the spacing makes this unreadable: > http://paste.openstack.org/show/730623/ ) > > But now, we no longer have a list of strings but something more like > [{"name": "A", "domain_name": "Default"} {"name": "B", "domain_name": > "Default", "name": "A", "domain_name": "domainB"}]. Since {0} isn't a > string, this example doesn't really work. Instead, let's assume that in > step (3) we converted the decoded AttributeValue text to an object. Then > the mapping could look more like this: > > [ > { > "local": > [ > { > "group": > { > "name": "{0.name}", > "domain": > { > "name": "{0.domain_name}" > } > } > } > ], "remote": > [ > { > "type": "openstack_groups" > } > ] > } > ] > (paste: http://paste.openstack.org/show/730622/ ) > > I can't come up with a reason not to do this at the moment. If we serialize the group+domain name information in SAML, then it seems appropriate to teach the service provider how to deserialize it and apply it to mappings. Otherwise, does it make sense to build serialization into the identity provider if we aren't going to use the domain name? > Alternatively, we could forget about the namespacing problem and simply > say we only pass group names in the assertion, and if you have ambiguous > group names you're on your own. We could also try to support both, e.g. > have an openstack_groups mean a list of group names for simpler use cases, > and openstack_groups_unique mean the list of encoded group+domain strings > for advanced use cases. 
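[Editorial aside: the `{0.name}`-style substitution in the second mapping example works with plain Python string formatting, provided the decoded attribute values are objects exposing `name` and `domain_name` attributes. A minimal sketch — illustrative only; the real engine lives in keystone's federation mapping code:]

```python
from types import SimpleNamespace

# Mapping rule fragment from the example above.
rule = {"name": "{0.name}", "domain": {"name": "{0.domain_name}"}}

# Assume step (3) decoded the AttributeValue text into an object like this:
group = SimpleNamespace(name="A", domain_name="domainB")

# str.format resolves attribute access inside the replacement field:
local_group = {
    "name": rule["name"].format(group),
    "domain": {"name": rule["domain"]["name"].format(group)},
}
assert local_group == {"name": "A", "domain": {"name": "domainB"}}
```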
> > Finally, whatever we decide for groups we should also apply to > openstack_roles which currently only supports global roles and not > domain-specific roles. > > (It's also worth noting, for clarity, that the samlize function does > handle namespaced projects, but this is because it's retrieving the project > from the token and therefore there is only ever one project and one project > domain so there is no ambiguity.) > > Thoughts? > > - Colleen (cmurphy) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Mon Sep 24 14:31:27 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 24 Sep 2018 16:31:27 +0200 Subject: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs In-Reply-To: References: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> Message-ID: <1537799487.1314295.1518727168.5B68A73B@webmail.messagingengine.com> On Mon, Sep 24, 2018, at 4:16 PM, Lance Bragstad wrote: > On Mon, Sep 24, 2018 at 7:00 AM Colleen Murphy wrote: > > > This is in regard to https://launchpad.net/bugs/1641625 and the proposed > > patch https://review.openstack.org/588211 for it. Thanks Vishakha for > > getting the ball rolling. > > > > tl;dr: Keystone as an IdP should support sending > > non-strings/lists-of-strings as user attribute values, specifically lists > > of keystone groups, here's how that might happen. 
> > > > Problem statement: > > > > When keystone is set up as a service provider with an external > > non-keystone identity provider, it is common to configure the mapping rules > > to accept a list of group names from the IdP and map them to some property > > of a local keystone user, usually also a keystone group name. When keystone > > acts as the IdP, it's not currently possible to send a group name as a user > > property in the assertion. There are a few problems: > > > > 1. We haven't added any openstack_groups key in the creation of the > > SAML assertion ( > > http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164 > > ). > > 2. If we did, this would not be enough. Unlike other IdPs, in keystone > > there can be multiple groups with the same name, namespaced by domain. So > > it's not enough for the SAML AttributeStatement to contain a > > semi-colon-separated list of group names, since a user could theoretically > > be a member of two or more groups with the same name. > > * Why can't we just send group IDs, which are unique? Because two > > different keystones are not going to have independent groups with the same > > UUID, so we cannot possibly map an ID of a group from keystone A to the ID > > of a different group in keystone B. We could map the ID of the group in in > > A to the name of a group in B but then operators need to create groups with > > UUIDs as names which is a little awkward for both the operator and the user > > who now is a member of groups with nondescriptive names. > > 3. If we then were able to encode a complex type like a group dict in > > a SAML assertion, we'd have to deal with it on the service provider side by > > being able to parse such an environment variable from the Apache headers. > > 4. The current mapping rules engine uses basic python string > > formatting to translate remote key-value pairs to local rules. 
We would > > need to change the mapping API to work with values more complex than > > strings and lists of strings. > > > > Possible solution: > > > > Vishakha's patch (https://review.openstack.org/588211) starts to solve > > (1) but it doesn't go far enough to solve (2-4). What we talked about at > > the PTG was: > > > > 2. Encode the group+domain as a string, for example by using the dict > > string repr or a string representation of some custom XML and maybe base64 > > encoding it. > > * It's not totally clear whether the AttributeValue class of the > > pysaml2 library supports any data types outside of the xmlns:xs namespace > > or whether nested XML is an option, so encoding the whole thing as an > > xs:string seems like the simplest solution. > > > > Encoding this makes sense. We can formally support different SAML data > types in the future if a better solution comes along. We would have to make > the service provider deal with both types of encoding, but we could > eventually consolidate, and users shouldn't know the difference. Right? The only way this would make a difference to the user is if they need to debug a request by actually looking at the response to this request[1]. If we were to base64-encode the string that immediately obfuscates what the actual value is. I'm not really sure if we need to base64-encode it or just serialize it some other way. [1] https://developer.openstack.org/api-ref/identity/v3-ext/index.html#id404 > > > > 3. The SP will have to be aware that openstack_groups is a special key > > that needs the encoding reversed. > > * I wrote down "MultiDict" in my notes but I don't recall exactly > > what format the environment variable would take that would make a MultiDict > > make sense here, in any case I think encoding the whole thing as a string > > eliminates the need for this. > > 4. We didn't talk about the mapping API, but here's what I think. 
If > > we were just talking about group names, the mapping API today would work > > like this (slight oversimplification for brevity): > > > > Given a list of openstack_groups like ["A", "B", "C"], it would work like > > this: > > > > [ > > { > > "local": > > [ > > { > > "group": > > { > > "name": "{0}", > > "domain": > > { > > "name": "federated_domain" > > } > > } > > } > > ], "remote": > > [ > > { > > "type": "openstack_groups" > > } > > ] > > } > > ] > > (paste in case the spacing makes this unreadable: > > http://paste.openstack.org/show/730623/ ) > > > > But now, we no longer have a list of strings but something more like > > [{"name": "A", "domain_name": "Default"} {"name": "B", "domain_name": > > "Default", "name": "A", "domain_name": "domainB"}]. Since {0} isn't a > > string, this example doesn't really work. Instead, let's assume that in > > step (3) we converted the decoded AttributeValue text to an object. Then > > the mapping could look more like this: > > > > [ > > { > > "local": > > [ > > { > > "group": > > { > > "name": "{0.name}", > > "domain": > > { > > "name": "{0.domain_name}" > > } > > } > > } > > ], "remote": > > [ > > { > > "type": "openstack_groups" > > } > > ] > > } > > ] > > (paste: http://paste.openstack.org/show/730622/ ) > > > > > I can't come up with a reason not to do this at the moment. If we serialize > the group+domain name information in SAML, then it seems appropriate to > teach the service provider how to deserialize it and apply it to mappings. > Otherwise, does it make sense to build serialization into the identity > provider if we aren't going to use the domain name? > > > > Alternatively, we could forget about the namespacing problem and simply > > say we only pass group names in the assertion, and if you have ambiguous > > group names you're on your own. We could also try to support both, e.g. 
> > have an openstack_groups mean a list of group names for simpler use cases, > > and openstack_groups_unique mean the list of encoded group+domain strings > > for advanced use cases. > > > > Finally, whatever we decide for groups we should also apply to > > openstack_roles which currently only supports global roles and not > > domain-specific roles. > > > > (It's also worth noting, for clarity, that the samlize function does > > handle namespaced projects, but this is because it's retrieving the project > > from the token and therefore there is only ever one project and one project > > domain so there is no ambiguity.) > > > > Thoughts? > > > > - Colleen (cmurphy) > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lbragstad at gmail.com Mon Sep 24 14:35:59 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 24 Sep 2018 09:35:59 -0500 Subject: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs In-Reply-To: <1537799487.1314295.1518727168.5B68A73B@webmail.messagingengine.com> References: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> <1537799487.1314295.1518727168.5B68A73B@webmail.messagingengine.com> Message-ID: On Mon, Sep 24, 2018 at 9:31 AM Colleen Murphy wrote: > On Mon, Sep 24, 2018, at 4:16 PM, Lance Bragstad wrote: > > On Mon, Sep 24, 2018 at 7:00 AM Colleen Murphy > wrote: > > > > > This is in regard to https://launchpad.net/bugs/1641625 and 
the > proposed > > > patch https://review.openstack.org/588211 for it. Thanks Vishakha for > > > getting the ball rolling. > > > > > > tl;dr: Keystone as an IdP should support sending > > > non-strings/lists-of-strings as user attribute values, specifically > lists > > > of keystone groups, here's how that might happen. > > > > > > Problem statement: > > > > > > When keystone is set up as a service provider with an external > > > non-keystone identity provider, it is common to configure the mapping > rules > > > to accept a list of group names from the IdP and map them to some > property > > > of a local keystone user, usually also a keystone group name. When > keystone > > > acts as the IdP, it's not currently possible to send a group name as a > user > > > property in the assertion. There are a few problems: > > > > > > 1. We haven't added any openstack_groups key in the creation of the > > > SAML assertion ( > > > > http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164 > > > ). > > > 2. If we did, this would not be enough. Unlike other IdPs, in > keystone > > > there can be multiple groups with the same name, namespaced by domain. > So > > > it's not enough for the SAML AttributeStatement to contain a > > > semi-colon-separated list of group names, since a user could > theoretically > > > be a member of two or more groups with the same name. > > > * Why can't we just send group IDs, which are unique? Because two > > > different keystones are not going to have independent groups with the > same > > > UUID, so we cannot possibly map an ID of a group from keystone A to > the ID > > > of a different group in keystone B. We could map the ID of the group > in in > > > A to the name of a group in B but then operators need to create groups > with > > > UUIDs as names which is a little awkward for both the operator and the > user > > > who now is a member of groups with nondescriptive names. > > > 3. 
If we then were able to encode a complex type like a group dict > in > > > a SAML assertion, we'd have to deal with it on the service provider > side by > > > being able to parse such an environment variable from the Apache > headers. > > > 4. The current mapping rules engine uses basic python string > > > formatting to translate remote key-value pairs to local rules. We would > > > need to change the mapping API to work with values more complex than > > > strings and lists of strings. > > > > > > Possible solution: > > > > > > Vishakha's patch (https://review.openstack.org/588211) starts to solve > > > (1) but it doesn't go far enough to solve (2-4). What we talked about > at > > > the PTG was: > > > > > > 2. Encode the group+domain as a string, for example by using the > dict > > > string repr or a string representation of some custom XML and maybe > base64 > > > encoding it. > > > * It's not totally clear whether the AttributeValue class of > the > > > pysaml2 library supports any data types outside of the xmlns:xs > namespace > > > or whether nested XML is an option, so encoding the whole thing as an > > > xs:string seems like the simplest solution. > > > > > > > Encoding this makes sense. We can formally support different SAML data > > types in the future if a better solution comes along. We would have to > make > > the service provider deal with both types of encoding, but we could > > eventually consolidate, and users shouldn't know the difference. Right? > > The only way this would make a difference to the user is if they need to > debug a request by actually looking at the response to this request[1]. If > we were to base64-encode the string that immediately obfuscates what the > actual value is. I'm not really sure if we need to base64-encode it or just > serialize it some other way. > Oh - yeah that makes sense. In your opinion, does that prevent us from adopting another way of solving the problem if we find a better data type? 
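[Editorial aside: the debuggability trade-off raised here can be seen directly. In this sketch the group is serialized as JSON — one possible encoding, not settled keystone behavior:]

```python
import base64
import json

group = {"name": "A", "domain_name": "Default"}

plain = json.dumps(group)
encoded = base64.b64encode(plain.encode()).decode()

# The plain form is readable in a raw SAML response while debugging;
# the base64 form is opaque to a human (it begins "eyJuYW1l...").
assert plain == '{"name": "A", "domain_name": "Default"}'
assert encoded.startswith("eyJuYW1l")

# Either way, the round trip is lossless:
assert json.loads(base64.b64decode(encoded).decode()) == group
```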
> > [1] > https://developer.openstack.org/api-ref/identity/v3-ext/index.html#id404 > > > > > > > 3. The SP will have to be aware that openstack_groups is a special > key > > > that needs the encoding reversed. > > > * I wrote down "MultiDict" in my notes but I don't recall > exactly > > > what format the environment variable would take that would make a > MultiDict > > > make sense here, in any case I think encoding the whole thing as a > string > > > eliminates the need for this. > > > 4. We didn't talk about the mapping API, but here's what I think. > If > > > we were just talking about group names, the mapping API today would > work > > > like this (slight oversimplification for brevity): > > > > > > Given a list of openstack_groups like ["A", "B", "C"], it would work > like > > > this: > > > > > > [ > > > { > > > "local": > > > [ > > > { > > > "group": > > > { > > > "name": "{0}", > > > "domain": > > > { > > > "name": "federated_domain" > > > } > > > } > > > } > > > ], "remote": > > > [ > > > { > > > "type": "openstack_groups" > > > } > > > ] > > > } > > > ] > > > (paste in case the spacing makes this unreadable: > > > http://paste.openstack.org/show/730623/ ) > > > > > > But now, we no longer have a list of strings but something more like > > > [{"name": "A", "domain_name": "Default"} {"name": "B", "domain_name": > > > "Default", "name": "A", "domain_name": "domainB"}]. Since {0} isn't a > > > string, this example doesn't really work. Instead, let's assume that in > > > step (3) we converted the decoded AttributeValue text to an object. 
> Then > > > the mapping could look more like this: > > > > > > [ > > > { > > > "local": > > > [ > > > { > > > "group": > > > { > > > "name": "{0.name}", > > > "domain": > > > { > > > "name": "{0.domain_name}" > > > } > > > } > > > } > > > ], "remote": > > > [ > > > { > > > "type": "openstack_groups" > > > } > > > ] > > > } > > > ] > > > (paste: http://paste.openstack.org/show/730622/ ) > > > > > > > > I can't come up with a reason not to do this at the moment. If we > serialize > > the group+domain name information in SAML, then it seems appropriate to > > teach the service provider how to deserialize it and apply it to > mappings. > > Otherwise, does it make sense to build serialization into the identity > > provider if we aren't going to use the domain name? > > > > > > > Alternatively, we could forget about the namespacing problem and simply > > > say we only pass group names in the assertion, and if you have > ambiguous > > > group names you're on your own. We could also try to support both, e.g. > > > have an openstack_groups mean a list of group names for simpler use > cases, > > > and openstack_groups_unique mean the list of encoded group+domain > strings > > > for advanced use cases. > > > > > > Finally, whatever we decide for groups we should also apply to > > > openstack_roles which currently only supports global roles and not > > > domain-specific roles. > > > > > > (It's also worth noting, for clarity, that the samlize function does > > > handle namespaced projects, but this is because it's retrieving the > project > > > from the token and therefore there is only ever one project and one > project > > > domain so there is no ambiguity.) > > > > > > Thoughts? 
> > > > > > - Colleen (cmurphy) > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Sep 24 15:03:37 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 24 Sep 2018 10:03:37 -0500 Subject: [openstack-dev] Forum Topic Submission Period In-Reply-To: <5B9FD2BB.3060806@openstack.org> References: <5B9FD2BB.3060806@openstack.org> Message-ID: Jimmy, Having a little trouble getting topics for Cinder.  Hoping to wrangle up more in our meeting on Wednesday.  Wanted to make sure that we could submit topics on Wednesday.  That is how I interpreted your note but wanted to be better safe than sorry. Thanks! Jay On 9/17/2018 11:13 AM, Jimmy McArthur wrote: > Hello Everyone! > > The Forum Topic Submission session started September 12 and will run > through September 26th. Now is the time to wrangle the topics you > gathered during your Brainstorming Phase and start pushing forum > topics through. Don't rely only on a PTL to make the agenda... step on > up and place the items you consider important front and center. 
> > As you may have noticed on the Forum Wiki > (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP > tool this year. We did our best to remove Summit specific language, > but if you notice something, just know that you are submitting to the > Forum.  URL is here: > > https://www.openstack.org/summit/berlin-2018/call-for-presentations > > Looking forward to seeing everyone's submissions! > > If you have questions or concerns about the process, please don't > hesitate to reach out. > > Cheers, > Jimmy > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Sep 24 15:10:04 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 24 Sep 2018 10:10:04 -0500 Subject: [openstack-dev] Forum Topic Submission Period In-Reply-To: References: <5B9FD2BB.3060806@openstack.org> Message-ID: <5BA8FE4C.3060906@openstack.org> Yes - we are taking submissions through Wednesday :) > Jay S Bryant > September 24, 2018 at 10:03 AM > > Jimmy, > > Having a little trouble getting topics for Cinder. Hoping to wrangle > up more in our meeting on Wednesday. Wanted to make sure that we > could submit topics on Wednesday. That is how I interpreted your note > but wanted to be better safe than sorry. > > Thanks! > Jay > > > > On 9/17/2018 11:13 AM, Jimmy McArthur wrote: > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > September 17, 2018 at 11:13 AM > Hello Everyone! 
> > The Forum Topic Submission session started September 12 and will run > through September 26th. Now is the time to wrangle the topics you > gathered during your Brainstorming Phase and start pushing forum > topics through. Don't rely only on a PTL to make the agenda... step on > up and place the items you consider important front and center. > > As you may have noticed on the Forum Wiki > (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP > tool this year. We did our best to remove Summit specific language, > but if you notice something, just know that you are submitting to the > Forum. URL is here: > > https://www.openstack.org/summit/berlin-2018/call-for-presentations > > Looking forward to seeing everyone's submissions! > > If you have questions or concerns about the process, please don't > hesitate to reach out. > > Cheers, > Jimmy > > _______________________________________________ > Openstack-track-chairs mailing list > Openstack-track-chairs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-track-chairs -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongle.li at gmail.com Mon Sep 24 15:10:08 2018 From: yongle.li at gmail.com (Fred Li) Date: Mon, 24 Sep 2018 23:10:08 +0800 Subject: [openstack-dev] Discussion about the future of OpenStack in China Message-ID: Hi folks, Recently several blog posts have discussed the future of OpenStack. If I am not mistaken, the first one is "OpenStack-8-year-itch"[1], and you can find its English version attached, thanks to Google Translate. The second one is "5-years-my-opinion-on-OpenStack" [2], with an English version attached as well. Please translate [3] to [6] and read them if you are interested. I don't want to judge anything here. I just want to share them because they have generated quite a hot discussion, and I think it is valuable for the whole community, not just part of it, to know about them.
[1] https://mp.weixin.qq.com/s/GM5cMOl0q3hb_6_eEiixzA [2] https://mp.weixin.qq.com/s/qZkE4o_BHBPlbIjekjDRKw [3] https://mp.weixin.qq.com/s/svX4z3JM5ArQ57A1jFoyLw [4] https://mp.weixin.qq.com/s/Nyb0OxI2Z7LxDpofTTyWOg [5] https://mp.weixin.qq.com/s/5GV4i8kyedHSbCxCO1VBRw [6] https://mp.weixin.qq.com/s/yeBcMogumXKGQ0KyKrgbqA -- Regards Fred Li (李永乐) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 2-5-years-my-opinion-on-OpenStack.pdf Type: application/pdf Size: 68507 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 1-OpenStack-8-year-itch.pdf Type: application/pdf Size: 2065823 bytes Desc: not available URL: From jungleboyj at gmail.com Mon Sep 24 15:10:56 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 24 Sep 2018 10:10:56 -0500 Subject: [openstack-dev] Forum Topic Submission Period In-Reply-To: <5BA8FE4C.3060906@openstack.org> References: <5B9FD2BB.3060806@openstack.org> <5BA8FE4C.3060906@openstack.org> Message-ID: Thank you sir! Jay On 9/24/2018 10:10 AM, Jimmy McArthur wrote: > Yes - we are taking submissions through Wednesday :) > >> Jay S Bryant >> September 24, 2018 at 10:03 AM >> >> Jimmy, >> >> Having a little trouble getting topics for Cinder.  Hoping to wrangle >> up more in our meeting on Wednesday.  Wanted to make sure that we >> could submit topics on Wednesday.  That is how I interpreted your >> note but wanted to be better safe than sorry. >> >> Thanks! 
>> Jay >> >> >> >> On 9/17/2018 11:13 AM, Jimmy McArthur wrote: >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> September 17, 2018 at 11:13 AM >> Hello Everyone! >> >> The Forum Topic Submission session started September 12 and will run >> through September 26th. Now is the time to wrangle the topics you >> gathered during your Brainstorming Phase and start pushing forum >> topics through. Don't rely only on a PTL to make the agenda... step >> on up and place the items you consider important front and center. >> >> As you may have noticed on the Forum Wiki >> (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP >> tool this year. We did our best to remove Summit specific language, >> but if you notice something, just know that you are submitting to the >> Forum. URL is here: >> >> https://www.openstack.org/summit/berlin-2018/call-for-presentations >> >> Looking forward to seeing everyone's submissions! >> >> If you have questions or concerns about the process, please don't >> hesitate to reach out. >> >> Cheers, >> Jimmy >> >> _______________________________________________ >> Openstack-track-chairs mailing list >> Openstack-track-chairs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-track-chairs > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jimmy at openstack.org Mon Sep 24 15:19:59 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 24 Sep 2018 10:19:59 -0500 Subject: [openstack-dev] [Forum] Forum Topic Submission Period - Time Running out! Message-ID: <5BA9009F.6000405@openstack.org> Just a reminder that there is a little more than 60 hours left to submit your forum topics. Please make haste to the Forum submission tool: https://www.openstack.org/summit/berlin-2018/call-for-presentations Cheers, Jimmy > Jimmy McArthur > September 17, 2018 at 11:13 AM > Hello Everyone! > > The Forum Topic Submission session started September 12 and will run > through September 26th. Now is the time to wrangle the topics you > gathered during your Brainstorming Phase and start pushing forum > topics through. Don't rely only on a PTL to make the agenda... step on > up and place the items you consider important front and center. > > As you may have noticed on the Forum Wiki > (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP > tool this year. We did our best to remove Summit specific language, > but if you notice something, just know that you are submitting to the > Forum. URL is here: > > https://www.openstack.org/summit/berlin-2018/call-for-presentations > > Looking forward to seeing everyone's submissions! > > If you have questions or concerns about the process, please don't > hesitate to reach out. > > Cheers, > Jimmy > > _______________________________________________ > Openstack-track-chairs mailing list > Openstack-track-chairs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-track-chairs -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Sep 24 16:10:44 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 24 Sep 2018 12:10:44 -0400 Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! 
In-Reply-To: <1537479809-sup-898@lrrr.local> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <1537479809-sup-898@lrrr.local> Message-ID: <3a9343d7-6e80-e793-4f8b-779504d732c6@redhat.com> On 20/09/18 5:46 PM, Doug Hellmann wrote: > Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +0000: >> tl;dr: The openstack, openstack-dev, openstack-sigs and >> openstack-operators mailing lists (to which this is being sent) will >> be replaced by a new openstack-discuss at lists.openstack.org mailing >> list. > > Since last week there was some discussion of including the openstack-tc > mailing list among these lists to eliminate confusion caused by the fact > that the list is not configured to accept messages from all subscribers > (it's meant to be used for us to make sure TC members see meeting > announcements). > > I'm inclined to include it and either use a direct mailing or the > [tc] tag on the new discuss list to reach TC members, but I would > like to hear feedback from TC members and other interested parties > before calling that decision made. Please let me know what you think. +1. I already sort mail to the -tc list and mail to the -dev list with the [tc] tag into the same mailbox, so the value to me of having a list that only TC members can post to without moderation is zero. - ZB From jungleboyj at gmail.com Mon Sep 24 16:20:27 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 24 Sep 2018 11:20:27 -0500 Subject: [openstack-dev] [cinder][forum] Need Topics for Berlin Forum ... Message-ID: <4a623475-2736-f220-727e-47b090fa78ab@gmail.com> Team, Just a reminder that we have an etherpad to plan topics for the Forum in Berlin [1].  We are short on topics right now so please take some time to think about what we should talk about.  I am also planning time for this discussion during our Wednesday meeting this week. Thanks for taking time to consider topics! 
Jay (jungleboyj) [1] https://etherpad.openstack.org/p/cinder-berlin-forum-proposals From colleen at gazlene.net Mon Sep 24 16:27:05 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 24 Sep 2018 18:27:05 +0200 Subject: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs In-Reply-To: References: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> <1537799487.1314295.1518727168.5B68A73B@webmail.messagingengine.com> Message-ID: <1537806425.3572033.1518879032.2408F47C@webmail.messagingengine.com> On Mon, Sep 24, 2018, at 4:35 PM, Lance Bragstad wrote: > On Mon, Sep 24, 2018 at 9:31 AM Colleen Murphy wrote: > > > On Mon, Sep 24, 2018, at 4:16 PM, Lance Bragstad wrote: > > > On Mon, Sep 24, 2018 at 7:00 AM Colleen Murphy > > wrote: > > > > > > > This is in regard to https://launchpad.net/bugs/1641625 and the > > proposed > > > > patch https://review.openstack.org/588211 for it. Thanks Vishakha for > > > > getting the ball rolling. > > > > > > > > tl;dr: Keystone as an IdP should support sending > > > > non-strings/lists-of-strings as user attribute values, specifically > > lists > > > > of keystone groups, here's how that might happen. > > > > > > > > Problem statement: > > > > > > > > When keystone is set up as a service provider with an external > > > > non-keystone identity provider, it is common to configure the mapping > > rules > > > > to accept a list of group names from the IdP and map them to some > > property > > > > of a local keystone user, usually also a keystone group name. When > > keystone > > > > acts as the IdP, it's not currently possible to send a group name as a > > user > > > > property in the assertion. There are a few problems: > > > > > > > > 1. We haven't added any openstack_groups key in the creation of the > > > > SAML assertion ( > > > > > > http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164 > > > > ). > > > > 2. 
If we did, this would not be enough. Unlike other IdPs, in > > keystone > > > > there can be multiple groups with the same name, namespaced by domain. > > So > > > > it's not enough for the SAML AttributeStatement to contain a > > > > semi-colon-separated list of group names, since a user could > > theoretically > > > > be a member of two or more groups with the same name. > > > > * Why can't we just send group IDs, which are unique? Because two > > > > different keystones are not going to have independent groups with the > > same > > > > UUID, so we cannot possibly map an ID of a group from keystone A to > > the ID > > > > of a different group in keystone B. We could map the ID of the group > > in in > > > > A to the name of a group in B but then operators need to create groups > > with > > > > UUIDs as names which is a little awkward for both the operator and the > > user > > > > who now is a member of groups with nondescriptive names. > > > > 3. If we then were able to encode a complex type like a group dict > > in > > > > a SAML assertion, we'd have to deal with it on the service provider > > side by > > > > being able to parse such an environment variable from the Apache > > headers. > > > > 4. The current mapping rules engine uses basic python string > > > > formatting to translate remote key-value pairs to local rules. We would > > > > need to change the mapping API to work with values more complex than > > > > strings and lists of strings. > > > > > > > > Possible solution: > > > > > > > > Vishakha's patch (https://review.openstack.org/588211) starts to solve > > > > (1) but it doesn't go far enough to solve (2-4). What we talked about > > at > > > > the PTG was: > > > > > > > > 2. Encode the group+domain as a string, for example by using the > > dict > > > > string repr or a string representation of some custom XML and maybe > > base64 > > > > encoding it. 
> > > > * It's not totally clear whether the AttributeValue class of > > the > > > > pysaml2 library supports any data types outside of the xmlns:xs > > namespace > > > > or whether nested XML is an option, so encoding the whole thing as an > > > > xs:string seems like the simplest solution. > > > > > > > > > > Encoding this makes sense. We can formally support different SAML data > > > types in the future if a better solution comes along. We would have to > > make > > > the service provider deal with both types of encoding, but we could > > > eventually consolidate, and users shouldn't know the difference. Right? > > > > The only way this would make a difference to the user is if they need to > > debug a request by actually looking at the response to this request[1]. If > > we were to base64-encode the string that immediately obfuscates what the > > actual value is. I'm not really sure if we need to base64-encode it or just > > serialize it some other way. > > > > Oh - yeah that makes sense. In your opinion, does that prevent us from > adopting another way of solving the problem if we find a better data type? Not 100% sure. The format of the SAML assertion is part of our API so we do have to be really careful about changing it, that's why I nacked the current patch. But how much leeway we have might depend on what the alternate solution is: maybe if the end result changes the NameFormat or the xsi:type of the Attribute, and we still support the xsi:type="xs:string" solution (the one we're discussing now), that might be okay? > > > > > > [1] > > https://developer.openstack.org/api-ref/identity/v3-ext/index.html#id404 > > > > > > > > > > 3. The SP will have to be aware that openstack_groups is a special > > key > > > > that needs the encoding reversed. 
> > > > * I wrote down "MultiDict" in my notes but I don't recall > > exactly > > > > what format the environment variable would take that would make a > > MultiDict > > > > make sense here, in any case I think encoding the whole thing as a > > string > > > > eliminates the need for this. > > > > 4. We didn't talk about the mapping API, but here's what I think. > > If > > > > we were just talking about group names, the mapping API today would > > work > > > > like this (slight oversimplification for brevity): > > > > > > > > Given a list of openstack_groups like ["A", "B", "C"], it would work > > like > > > > this: > > > > > > > > [ > > > > { > > > > "local": > > > > [ > > > > { > > > > "group": > > > > { > > > > "name": "{0}", > > > > "domain": > > > > { > > > > "name": "federated_domain" > > > > } > > > > } > > > > } > > > > ], "remote": > > > > [ > > > > { > > > > "type": "openstack_groups" > > > > } > > > > ] > > > > } > > > > ] > > > > (paste in case the spacing makes this unreadable: > > > > http://paste.openstack.org/show/730623/ ) > > > > > > > > But now, we no longer have a list of strings but something more like > > > > [{"name": "A", "domain_name": "Default"} {"name": "B", "domain_name": > > > > "Default", "name": "A", "domain_name": "domainB"}]. Since {0} isn't a > > > > string, this example doesn't really work. Instead, let's assume that in > > > > step (3) we converted the decoded AttributeValue text to an object. > > Then > > > > the mapping could look more like this: > > > > > > > > [ > > > > { > > > > "local": > > > > [ > > > > { > > > > "group": > > > > { > > > > "name": "{0.name}", > > > > "domain": > > > > { > > > > "name": "{0.domain_name}" > > > > } > > > > } > > > > } > > > > ], "remote": > > > > [ > > > > { > > > > "type": "openstack_groups" > > > > } > > > > ] > > > > } > > > > ] > > > > (paste: http://paste.openstack.org/show/730622/ ) > > > > > > > > > > > I can't come up with a reason not to do this at the moment. 
If we > > serialize > > > the group+domain name information in SAML, then it seems appropriate to > > > teach the service provider how to deserialize it and apply it to > > mappings. > > > Otherwise, does it make sense to build serialization into the identity > > > provider if we aren't going to use the domain name? > > > > > > > > > > Alternatively, we could forget about the namespacing problem and simply > > > > say we only pass group names in the assertion, and if you have > > ambiguous > > > > group names you're on your own. We could also try to support both, e.g. > > > > have an openstack_groups mean a list of group names for simpler use > > cases, > > > > and openstack_groups_unique mean the list of encoded group+domain > > strings > > > > for advanced use cases. > > > > > > > > Finally, whatever we decide for groups we should also apply to > > > > openstack_roles which currently only supports global roles and not > > > > domain-specific roles. > > > > > > > > (It's also worth noting, for clarity, that the samlize function does > > > > handle namespaced projects, but this is because it's retrieving the > > project > > > > from the token and therefore there is only ever one project and one > > project > > > > domain so there is no ambiguity.) > > > > > > > > Thoughts? 
> > > > > > > > - Colleen (cmurphy) > > > > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Mon Sep 24 16:33:56 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 24 Sep 2018 11:33:56 -0500 Subject: [openstack-dev] [oslo] upgradecheck core list populated Message-ID: Hey, The oslo.upgradecheck core list is now populated. oslo-core and Matt Riedemann should have +2 rights on it, so review away!
:-) -Ben From openstack at nemebean.com Mon Sep 24 16:41:52 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 24 Sep 2018 11:41:52 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-29 Update In-Reply-To: <39a23c25-eed0-fa1e-0afd-14465f35ee14@gmail.com> References: <2ce541ea-7f2c-6d12-2831-3a658e69e52e@gmail.com> <39a23c25-eed0-fa1e-0afd-14465f35ee14@gmail.com> Message-ID: On 09/22/2018 11:15 AM, Matt Riedemann wrote: > On 9/21/2018 4:19 PM, Ben Nemec wrote: >>> * The only two projects that I'm aware of with patches up at this >>> point are monasca [2] and designate [3]. The monasca one is tricky >>> because as I've found going through release notes for some projects, >>> they don't really have any major upgrade impacts so writing checks is >>> not obvious. I don't have a great solution here. What monasca has >>> done is add the framework with a noop check. If others are in the >>> same situation, I'd like to hear your thoughts on what you think >>> makes sense here. The alternative is these projects opt out of the >>> goal for Stein and just add the check code later when it makes sense >>> (but people might forget or not care to do that later if it's not a >>> goal). >> >> My inclination is for the command to exist with a noop check, the main >> reason being that if we create it for everyone this cycle then the >> deployment tools can implement calls to the status commands all at >> once. If we wait until checks are needed then someone has to not only >> implement it in the service but also remember to go update all of the >> deployment tools. Implementing a noop check should be pretty trivial >> with the library so it isn't a huge imposition. > > Yeah, I agree, and I've left comments on the patch to give some ideas on > how to write the noop check with a description that explains it's an > initial check but doesn't really do anything. 
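[Editor's note: the noop pattern being discussed is small enough to sketch. The following is a self-contained illustration only — the class and result names mimic what the oslo.upgradecheck library provides, but this is a schematic of the pattern, not the library's actual code:]

```python
import enum


class Code(enum.IntEnum):
    # Conventional exit codes for a "<project>-status upgrade check"
    # command: 0 = all clear, 1 = warning, 2 = failure (block the upgrade).
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2


class Result:
    def __init__(self, code, details=''):
        self.code = code
        self.details = details


class UpgradeCommands:
    """Run every registered check and report the worst result code."""

    _upgrade_checks = ()

    def check(self):
        worst = Code.SUCCESS
        for name, func in self._upgrade_checks:
            result = func(self)
            worst = max(worst, result.code)
            print('%-24s %-8s %s' % (name, result.code.name, result.details))
        return int(worst)


class Checks(UpgradeCommands):
    def _check_placeholder(self):
        # No upgrade impacts for this release; succeed with a note saying
        # this is an intentional no-op, so deployment tooling can already
        # wire the command into its upgrade workflow.
        return Result(Code.SUCCESS, 'no checks needed for this release')

    _upgrade_checks = (('placeholder', _check_placeholder),)


if __name__ == '__main__':
    raise SystemExit(Checks().check())
```

The value of shipping this early is exactly the argument above: deployment tools can call the command unconditionally and key off the exit code, and real checks can later be appended to `_upgrade_checks` without changing the interface.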
The alternative would be > to dump the table header for the results but then not have any rows, > which could be more confusing. > +1 to "this page intentionally left blank", hopefully without the logical contradiction those pages always create. ;-) From jaypipes at gmail.com Mon Sep 24 17:12:21 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 24 Sep 2018 13:12:21 -0400 Subject: [openstack-dev] Discussion about the future of OpenStack in China In-Reply-To: References: Message-ID: <65bb8c01-dda8-601e-786e-9a998a99ddeb@gmail.com> Fred, I had a hard time understanding the articles. I'm not sure if you used Google Translate to do the translation from Chinese to English, but I personally found both of them difficult to follow. There were a couple points that I did manage to decipher, though. One thing that both articles seemed to say was that OpenStack doesn't meet public (AWS-ish) cloud use cases and OpenStack doesn't compare favorably to VMWare either. Is there a large contingent of Chinese OpenStack users that expect OpenStack to be a free (as in beer) version of VMware technology? What are the 3 most important features that Chinese OpenStack users would like to see included in OpenStack projects? Thanks, -jay On 09/24/2018 11:10 AM, Fred Li wrote: > Hi folks, > > Recently several blog posts have discussed the future of > OpenStack. If I am not mistaken, the first one is > "OpenStack-8-year-itch"[1], and you can find its English version > attached, thanks to Google Translate. The second one is > "5-years-my-opinion-on-OpenStack" [2], with the English version attached as > well. Please translate [3] to [6] and read them if you are interested. > > I don't want to judge anything here. I just want to share them, as they have > generated quite a hot discussion, and I think it is valuable for the whole community, > not just part of it, to know about.
> > [1] https://mp.weixin.qq.com/s/GM5cMOl0q3hb_6_eEiixzA > [2] https://mp.weixin.qq.com/s/qZkE4o_BHBPlbIjekjDRKw > [3] https://mp.weixin.qq.com/s/svX4z3JM5ArQ57A1jFoyLw > [4] https://mp.weixin.qq.com/s/Nyb0OxI2Z7LxDpofTTyWOg > [5] https://mp.weixin.qq.com/s/5GV4i8kyedHSbCxCO1VBRw > [6] https://mp.weixin.qq.com/s/yeBcMogumXKGQ0KyKrgbqA > -- > Regards > Fred Li (李永乐) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jdennis at redhat.com Mon Sep 24 18:40:17 2018 From: jdennis at redhat.com (John Dennis) Date: Mon, 24 Sep 2018 14:40:17 -0400 Subject: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs In-Reply-To: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> References: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> Message-ID: <48f4ddf1-3d93-340e-3ad2-11bc4ef004ef@redhat.com> On 9/24/18 8:00 AM, Colleen Murphy wrote: > This is in regard to https://launchpad.net/bugs/1641625 and the proposed patch https://review.openstack.org/588211 for it. Thanks Vishakha for getting the ball rolling. > > tl;dr: Keystone as an IdP should support sending non-strings/lists-of-strings as user attribute values, specifically lists of keystone groups, here's how that might happen. > > Problem statement: > > When keystone is set up as a service provider with an external non-keystone identity provider, it is common to configure the mapping rules to accept a list of group names from the IdP and map them to some property of a local keystone user, usually also a keystone group name. When keystone acts as the IdP, it's not currently possible to send a group name as a user property in the assertion. There are a few problems: > > 1. 
We haven't added any openstack_groups key in the creation of the SAML assertion (http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164). > 2. If we did, this would not be enough. Unlike other IdPs, in keystone there can be multiple groups with the same name, namespaced by domain. So it's not enough for the SAML AttributeStatement to contain a semi-colon-separated list of group names, since a user could theoretically be a member of two or more groups with the same name. > * Why can't we just send group IDs, which are unique? Because two different keystones are not going to have independent groups with the same UUID, so we cannot possibly map an ID of a group from keystone A to the ID of a different group in keystone B. We could map the ID of the group in A to the name of a group in B but then operators need to create groups with UUIDs as names which is a little awkward for both the operator and the user who now is a member of groups with nondescriptive names. > 3. If we then were able to encode a complex type like a group dict in a SAML assertion, we'd have to deal with it on the service provider side by being able to parse such an environment variable from the Apache headers. > 4. The current mapping rules engine uses basic Python string formatting to translate remote key-value pairs to local rules. We would need to change the mapping API to work with values more complex than strings and lists of strings. > > Possible solution: > > Vishakha's patch (https://review.openstack.org/588211) starts to solve (1) but it doesn't go far enough to solve (2-4). What we talked about at the PTG was: > > 2. Encode the group+domain as a string, for example by using the dict string repr or a string representation of some custom XML and maybe base64 encoding it.
> * It's not totally clear whether the AttributeValue class of the pysaml2 library supports any data types outside of the xmlns:xs namespace or whether nested XML is an option, so encoding the whole thing as an xs:string seems like the simplest solution. > 3. The SP will have to be aware that openstack_groups is a special key that needs the encoding reversed. > * I wrote down "MultiDict" in my notes but I don't recall exactly what format the environment variable would take that would make a MultiDict make sense here, in any case I think encoding the whole thing as a string eliminates the need for this. > 4. We didn't talk about the mapping API, but here's what I think. If we were just talking about group names, the mapping API today would work like this (slight oversimplification for brevity): > > Given a list of openstack_groups like ["A", "B", "C"], it would work like this: > > [ > { > "local": > [ > { > "group": > { > "name": "{0}", > "domain": > { > "name": "federated_domain" > } > } > } > ], "remote": > [ > { > "type": "openstack_groups" > } > ] > } > ] > (paste in case the spacing makes this unreadable: http://paste.openstack.org/show/730623/ ) > > But now, we no longer have a list of strings but something more like [{"name": "A", "domain_name": "Default"} {"name": "B", "domain_name": "Default", "name": "A", "domain_name": "domainB"}]. Since {0} isn't a string, this example doesn't really work. Instead, let's assume that in step (3) we converted the decoded AttributeValue text to an object. Then the mapping could look more like this: > > [ > { > "local": > [ > { > "group": > { > "name": "{0.name}", > "domain": > { > "name": "{0.domain_name}" > } > } > } > ], "remote": > [ > { > "type": "openstack_groups" > } > ] > } > ] > (paste: http://paste.openstack.org/show/730622/ ) > > Alternatively, we could forget about the namespacing problem and simply say we only pass group names in the assertion, and if you have ambiguous group names you're on your own. 
We could also try to support both, e.g. have an openstack_groups mean a list of group names for simpler use cases, and openstack_groups_unique mean the list of encoded group+domain strings for advanced use cases. > > Finally, whatever we decide for groups we should also apply to openstack_roles which currently only supports global roles and not domain-specific roles. > > (It's also worth noting, for clarity, that the samlize function does handle namespaced projects, but this is because it's retrieving the project from the token and therefore there is only ever one project and one project domain so there is no ambiguity.) > A few thoughts to help focus the discussion: * Namespacing is critical, no design should be permitted which allows for ambiguous names. Ambiguous names are a security issue and can be used by an attacker. The SAML designers recognized the importance to disambiguate names. In SAML names are conveyed inside a NameIdentifier element which (optionally) includes "name qualifier" attributes which in SAML lingo is a namespace name. * SAML does not define the format of an attribute value. You can use anything you want as long as it can be expressed in valid XML as long as the cooperating parties know how to interpret the XML content. But herein lies the problem. Very few SAML implementations know how to consume an attribute value other than a string. In the real world, despite what the SAML spec says is permitted is the constraint attribute values is a string. * I haven't looked at the pysaml implementation but I'd be surprised if it treated attribute values as anything other than a string. In theory it could take any Python object (or JSON) and serialize it into XML but you would still be stuck with the receiver being unable to parse the attribute value (see above point). * You can encode complex data in an attribute value while only using a simple string. The only requirement is the relying party knowing how to interpret the string value. 
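[Editor's note: a rough sketch of that JSON-as-string approach, using a hypothetical `openstack_groups` payload shaped like the examples earlier in the thread — this is illustrative only, not keystone's actual implementation:]

```python
import json

# Two groups that share a name but live in different domains -- the case
# that makes a flat "A;B;C" list of group names ambiguous.
groups = [
    {"name": "admins", "domain": {"name": "Default"}},
    {"name": "admins", "domain": {"name": "domainB"}},
]


def encode_groups(groups):
    """IdP side: serialize the structure into a plain xs:string value."""
    return json.dumps(groups, sort_keys=True)


def decode_groups(attribute_value):
    """SP side: the SAML library hands the application an opaque string;
    the application, knowing the convention, parses it back out."""
    return json.loads(attribute_value)


value = encode_groups(groups)
assert isinstance(value, str)          # still just a string to the SAML layer
assert decode_groups(value) == groups  # round-trips without loss
```

Nothing in the SAML exchange itself has to understand the payload; only the mapping layer on the service-provider side needs to know that this particular attribute carries JSON.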
Note, this is distinctly different than using non-string attribute values because of who is responsible for parsing the value. If you use a non-string attribute value the SAML library need to know how to parse it, none or very few will know how to process that element. But if it's a string value the SAML library will happily pass that string back up to the application who can then interpret it. The easiest way to embed complex data in a string is with JSON, we do it all the time, all over the place in OpenStack. [1][2] So my suggestion would be to give the attribute a meaningful name. Define a JSON schema for the data and then let the upper layers decode the JSON and operate on it. This is no different than any other SAML attribute passed as a string, the receive MUST know how to interpret the string value. [1] We already pass complex data in a SAML attribute string value. We permit a comma separated list of group names to appear in the 'groups' mapping rule (although I don't think this feature is documented in our mapping rules documentation). The receiver (our mapping engine) has hard-coded logic to look for a list of names. [2] We might want to prepend a format specifier to string containing complex data, e.g. "JSON:{json object}". Our parser could then look for a leading format tag and if if finds one strip it off and pass the rest of the string into the proper parser. -- John Dennis From doug at doughellmann.com Mon Sep 24 18:59:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 24 Sep 2018 14:59:40 -0400 Subject: [openstack-dev] [release][ironic][tripleo][oslo][neutron] recovering from today's release failures Message-ID: Earlier today a bad version of twine (1.12.0) caused issues with many of the release jobs that were trying to publish artifacts to pypi.python.org. I didn't notice those failures until after all of the releases had been processed, unfortunately, which left us with quite a few broken releases to try to recover from. 
After working out the cause of the problem, and finding that in fact some, but not all, of the release artifacts were published, we started making a list of the manual recovery steps we would have to take. And that list was really really long. So, instead of doing all of that, we're going to re-tag the releases, using new version numbers, to produce good artifacts. I have prepared https://review.openstack.org/#/c/604875/ to do this. I will also run some tests to ensure the publishing works before approving those re-releases. Sorry for the inconvenience, Doug From mriedemos at gmail.com Mon Sep 24 19:06:21 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 24 Sep 2018 14:06:21 -0500 Subject: [openstack-dev] [glance][upgrade-checkers] Question about glance rocky upgrade release note Message-ID: <5fa6be15-bf93-2f74-7799-2d602b96f363@gmail.com> Looking at the upgrade-checkers goal [1] for glance and the Rocky upgrade release notes [2], one upgrade note says: "As Image Import will be always enabled, care needs to be taken that it is configured properly from this release forward. The ‘enable_image_import’ option is silently ignored." Are there more specific docs about how to configure the 'image import' feature so that I can be sure I'm careful? In other words, are there specific things a "glance-status upgrade check" check could look at and say, "your image import configuration is broken, here are details on how you should do this"? I'm willing to help write the upgrade check for glance, but need more details on that release note. [1] https://storyboard.openstack.org/#!/story/2003657 [2] https://docs.openstack.org/releasenotes/glance/rocky.html#upgrade-notes -- Thanks, Matt From dharmendra.kushwaha at india.nec.com Mon Sep 24 19:12:39 2018 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Mon, 24 Sep 2018 19:12:39 +0000 Subject: [openstack-dev] [Tacker] Cancelling weekly meeting this week. 
Message-ID: Dear Tacker members, We discussed most of the topics during the PTG on Friday, so we are cancelling this week's meeting. Please focus on the discussed specs & code. Thanks & Regards Dharmendra Kushwaha From mriedemos at gmail.com Mon Sep 24 19:13:08 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 24 Sep 2018 14:13:08 -0500 Subject: [openstack-dev] [glance][upgrade-checkers] Question about glance rocky upgrade release note In-Reply-To: <5fa6be15-bf93-2f74-7799-2d602b96f363@gmail.com> References: <5fa6be15-bf93-2f74-7799-2d602b96f363@gmail.com> Message-ID: <651d4eb2-f838-12aa-867e-29928dc993c1@gmail.com> On 9/24/2018 2:06 PM, Matt Riedemann wrote: > Are there more specific docs about how to configure the 'image import' > feature so that I can be sure I'm careful? In other words, are there > specific things a "glance-status upgrade check" check could look at and > say, "your image import configuration is broken, here are details on how > you should do this"? I guess this answers the question about docs: https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html Would a basic upgrade check be such that if glance-api.conf contains enable_image_import=False, you're going to have issues since that option is removed in Rocky? -- Thanks, Matt From melwittt at gmail.com Mon Sep 24 21:00:44 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 24 Sep 2018 14:00:44 -0700 Subject: [openstack-dev] [nova] summit forum topic submission deadline Wed Sep 26 Message-ID: <484fdacd-e8a4-8acb-8380-bf23fa5dc406@gmail.com> Hey everyone, This is a reminder that the deadline for submitting summit forum topics is coming up soon in two days, Wed Sep 26.
Please see our forum topic etherpad for links to forum information and instructions on how to submit a topic: https://etherpad.openstack.org/p/nova-forum-stein Cheers, -melanie From doug at doughellmann.com Mon Sep 24 21:06:56 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 24 Sep 2018 17:06:56 -0400 Subject: [openstack-dev] [release][ironic][tripleo][oslo][neutron] recovering from today's release failures In-Reply-To: References: Message-ID: Doug Hellmann writes: > Earlier today a bad version of twine (1.12.0) caused issues with many of > the release jobs that were trying to publish artifacts to > pypi.python.org. I didn't notice those failures until after all of the > releases had been processed, unfortunately, which left us with quite a > few broken releases to try to recover from. > > After working out the cause of the problem, and finding that in fact > some, but not all, of the release artifacts were published, we started > making a list of the manual recovery steps we would have to take. And > that list was really really long. > > So, instead of doing all of that, we're going to re-tag the releases, > using new version numbers, to produce good artifacts. I have prepared > https://review.openstack.org/#/c/604875/ to do this. > > I will also run some tests to ensure the publishing works before > approving those re-releases. > > Sorry for the inconvenience, > Doug We've had a few release requests come in since this email, so I want to make sure everyone understands that we're going to wait to sort out the issues we already have before releasing anything else. The test release is waiting in the check queue still, so it's likely to be a while before we know for sure that things are working. Doug From openstack at nemebean.com Mon Sep 24 21:30:09 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 24 Sep 2018 16:30:09 -0500 Subject: [openstack-dev] [storyboard] Prioritization? 
Message-ID: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> Hi, This came up in the Oslo meeting as a result of my initial look at the test Storyboard import. It appears all of the priority data from launchpad gets lost in the migration, which is going to make organizing hundreds of bugs somewhat difficult. I'm particularly not fond of it after spending last cycle whittling down our untriaged bug list. :-) Work lists and tags were mentioned as possible priority management tools in Storyboard, so is there some way to migrate launchpad priorities into one of those automatically? If not, are there any plans to add that? It sounded like a similar conversation is happening with a few other teams so we wanted to bring the discussion to the mailing list for visibility. Thanks. -Ben From openstack at nemebean.com Mon Sep 24 21:38:12 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 24 Sep 2018 16:38:12 -0500 Subject: [openstack-dev] [storyboard][oslo] Fewer stories than bugs? Message-ID: <61799e53-2fa6-40a7-ebbd-a1f3df624a8f@nemebean.com> This is a more oslo-specific (maybe) question that came out of the test migration. I noticed that launchpad is reporting 326 open bugs across the Oslo projects, but in Storyboard there are only 266 stories created. While I'm totally onboard with reducing our bug backlog, I'm curious why that is the case. I'm speculating that maybe Launchpad counts bugs that affect multiple Oslo projects as multiple bugs whereas Storyboard is counting them as a single story? I think we were also going to skip https://bugs.launchpad.net/openstack-infra which for some reason appeared in the oslo group, but that's only two bugs so it doesn't account for anywhere near the full difference. Mostly I just want to make sure we didn't miss something. I'm hoping this is a known behavior and we don't have to start comparing bug lists to find the difference. :-) Thanks. 
-Ben From miguel at mlavalle.com Mon Sep 24 21:38:59 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 24 Sep 2018 16:38:59 -0500 Subject: [openstack-dev] [Neutron] Stein PTG Summary Message-ID: Dear Neutron team, Thank you very much for your hard work during the PTG in Denver, Thanks to your efforts, we had a very productive week and we planned and prioritized a lot of work to be done during the Stein cycle. Following below is a high level summary of the discussions we had. If there is something I left out, please reply to this email thread to add it. However, if you want to continue the discussion on any of the individual points summarized below, please start a new thread, so we don't have a lot of conversations going on attached to this update. You can find the etherpad we used during the PTG meetings here: https://etherpad.openstack.org/p/neutron-stein-ptg. Retrospective ========== * The following blueprints were not finished during Rocky and have been rolled over to Stein: - Policy in code is now targeted for Stein-1. Akihiro Amotoki is in charge of this implementation: https://blueprints.launchpad.net/neutron/+spec/get-policy-from-neutron-lib - Strict minimum bandwidth support (scheduling aware) is targeted for Stein-2. The assignees on the Neutron side are Bence Romsics and Lajos Katona. Balazs Gibizer is the assignee on the Nova side. There are no blockers for this feature to be implemented in either side. https://blueprints.launchpad.net/neutron/+spec/strict-minimum-bandwidth-support - Enable adoption of an existing subnet into a subnetpool is targeted for Stein-2. Bernard Cafarelli has taken over the implementation of this blueprint: https://blueprints.launchpad.net/neutron/+spec/subnet-onboard - Decoupling database imports/access for neutron-lib. The limiting factor in this effort is reviews velocity both in Neutron and the out of tree related projects. 
Attention will be paid to reviewing patches promptly, to try to finish this effort in Stein: https://blueprints.launchpad.net/neutron/+spec/neutron-lib-decouple-db * One area of concern at the end of Rocky is how to reduce the likelihood of patches merged in Neutron breaking Stadium and networking projects - The approach to be adopted is to add one non-voting job per Stadium project to the Neutron check queue. Miguel Lavalle will send a message to the ML asking for job proposals from the Stadium projects - Non Stadium networking projects are also invited to add 3rd party CI jobs, similar to what is done in projects such as Cinder. An example patch is here: https://review.openstack.org/#/c/604382/. Boden Russell indicated he will follow this approach in the case of openstack/vmware-nsx. - Another alternative that was considered was to release Neutron once a month so Stadium and networking projects can quickly go back to a closer stable point. The consensus of the team, though, gravitated more towards non-voting Stadium and 3rd party CI jobs as described in the previous two points. - Miguel Lavalle will update the code reviews section of the documentation ( https://github.com/openstack/neutron/blob/master/doc/source/contributor/policies/code-reviews.rst) with guidelines on how to use the http://codesearch.openstack.org/ online tool to spot impacts of Neutron patches in the Stadium and other related projects SR-IOV VF to VF mirroring ==================== * Blueprint https://blueprints.launchpad.net/neutron/+spec/port-mirroring-sriov-vf proposes to add to Neutron and TaaS (Tap-as-a-Service, https://github.com/openstack/tap-as-a-service) the capability to do SR-IOV VF to VF mirroring. - A demo of how this can be implemented is here https://etherpad.net/p/taas_sriov_demo_stein_ptg and here https://bluejeans.com/s/zLiRn/ - The spec for this effort (https://review.openstack.org/#/c/574477) proposes to implement TaaS agent and driver to support SR-IOV VF to VF mirroring.
This implies the implementation of a framework within TaaS to manage several types of agents - The spec also proposes that the way to specify VLANs to be mirrored will be a "vlan_mirror_list" field in the binding profile of the port associated with the TaaS Tap Service. There was feedback from the room that the VLANs may be specified instead in the TaaS Tap Flow. Two alternatives were suggested. The first one is to add to the Tap Flow a "vlan_filter" attribute. The second one is to add to the Tap Flow the UUID of a classifier, which can be from the CCF (Common Classifier Framework) or, if CCF is not ready, from a classifier developed in TaaS. These alternatives will be discussed in the spec, to give developers in the broader community the opportunity to influence the decision - To support this blueprint, an effort will be made to make TaaS ready for the Neutron Stadium of projects. The guidelines and checklist to be used to assess TaaS readiness for the Stadium are outlined here: https://docs.openstack.org/neutron/latest/contributor/stadium/governance.html. Miguel Lavalle and Munish Mehan will work on a patch to conduct an assessment similar to this one: https://review.openstack.org/#/c/506012/
Skydive agents collect topology information and flows and forward them to a central agent for further analysis * The proposal is to transform the whitebox tests we have in Tempest for DVR or HA, which assume specific internal knowledge of the reference implementation, and instead capture the specific traffic over each node with the Skydive API and check that the traffic is being handled where and how we want (DVR, HA routers, etc). * The demo script is here: https://etherpad.openstack.org/p/neutron-dataplane-testing * The demo was well received by the team. The next steps are: - Miguel Ajo to talk to Federico Ressi to agree on tests to be committed by the end of Stein - If they agree to commit tests by the end of Stein, Miguel Lavalle will create a blueprint to track the effort L3 topics ======= * Swaminathan Vasudevan presented remotely to the team the alternatives to fix the long-standing bug https://bugs.launchpad.net/neutron/+bug/1774459, Update permanent ARP entries for allowed_address_pair IPs in DVR Routers. - This bug refers to allowed_address_pairs IPs associated with unbound ports and DVR routers. The ARP entry for the allowed_address_pair IP does not change based on the GARP issued by any Keepalived instance, since DVR does the ARP table update through the control plane, and does not allow any ARP requests to get out of the node. - Swami was seeking advice from people with more L2 / OpenFlow knowledge on how to address this issue - With significant input from Miguel Ajo and Daniel Alvarez, the agreed-upon plan is to intercept GARP packets and forward them to the local controller for processing. Basically, flows will be programmed dynamically when a GARP is recognized by the controller * The next L3 topic discussed was the automatic re-balancing of DHCP agents and L3 routers. Frequently, after starting and stopping nodes during normal operations, some network nodes might end up overloaded hosting DHCP servers and L3 routers.
The team agreed on the following approaches: - No automatic re-balancing, since it may lead to long transitions in the system - For HA routers, Keepalived priorities will be used to maintain the balance - For DVR centralized routers, legacy routers and DHCP agents, scripts to be executed by the cloud admin will be the way to go. Operators can customize these scripts to better fit their needs - Neutron documentation will also be improved in this area * Miguel Lavalle presented https://bugs.launchpad.net/neutron/+bug/1789391, which proposes to implement VPC peering for Neutron routers. - Some private cloud offerings (Huawei's among them) give the users the ability to implement AWS VPC functionality using Neutron routers - The next logical step may be VPC peering by enabling Neutron routers to talk to each other. - The proposal was well received and accepted by the team. The next step is to write a spec to iron out the technical details neutron-lib topics ============ * The first topic was how to identify projects that are "current" and hence want the ongoing neutron-lib consumption patches created by Boden Russell - Up until now we look at the neutron-lib version in requirements.txt to see which projects are "up to date" and should get consumption patches. It has become difficult to search for and easily find these current consumers. - It was agreed that from now on, an active "opt in" mechanism will be implemented. It will consist of a comment in a project file or a project tag. Boden will send a message to the ML to outline the process * For those projects that opt in to neutron-lib consumption patches - Boden is willing to help them set up for Zuul V3 - https://etherpad.openstack.org/p/neutron-sibling-setup contains a list of "current" networking projects under "List of networking related projects that are current and what they are missing". All projects need local tox targets.
Only two appear to require Zuul updates - If projects have to pull Tempest tests to their own Tempest plugin, they will have to do it themselves. However, it appears all have moved their Tempest code out of tree * The team discussed the possibility of testing Neutron changes with neutron-lib from the master branch instead of the latest released version - Today we have the "dummy patch" approach https://docs.openstack.org/neutron-lib/latest/contributor/review-guidelines.html It seems this could be done by having the Neutron change tested with a dummy neutron-lib patch that depends on it; this would run the "neutron-src" Zuul jobs on the dummy lib patch, using the respective Neutron master patch - After further testing, it was discovered that the "dummy patch" approach is not working as expected. As a consequence, Boden has proposed the following patch to start testing Neutron patches with neutron-lib master: https://review.openstack.org/#/c/602748/ * The team agreed that a spec is needed on how to handle API definitions for extensions that extend a dynamic set of resources. Volunteers will be requested for this (see next bullet) * Nate Johnston indicated that developers who want to help are not sure where to start, and suggested the creation of a punch list of the remaining items to be migrated - Miguel Lavalle will send a message to the ML asking for volunteers for the neutron-lib effort - If we get volunteers, Boden will update the punch list.
A starting point could be to drive the "dynamic API extensions" topic mentioned in the previous bullet SmartNIC support ============= * Lianhao Lu and Isaku Yamahata submitted two specs for the team's consideration to support SmartNICs in Neutron: https://review.openstack.org/#/c/595402/ and https://review.openstack.org/#/c/595512 * In the context of this topic, a SmartNIC is a card that runs OVS, which enables its remote control over the OVSDB protocol * The overall goal is to significantly increase the number of Ironic compute hosts that can be managed in a deployment. - The general idea is to create a “super OVS agent”, running in the OpenStack controller, that will have multiple threads configuring SmartNICs in the compute hosts using OVSDB, eliminating the need to have an agent in each host and eliminating, as a consequence, the use of communications over the RPC channel, which has been identified as a bottleneck in the number of compute hosts that a deployment can handle. - This approach can be extended to VM-based deployments with no SmartNICs, by configuring OVS in the compute hosts to be managed remotely over OVSDB by the “super OVS agent”. This would make it possible to increase the maximum number of compute hosts that Neutron can manage in a single deployment * The proposal was well received by the team and it was agreed that the next step is to add more detail to the specs under review, with the aim of clarifying the technical details of the implementation - One point that requires special clarification is how the proposed changes will impact the L2pop mechanism driver.
This clarification might take place in a separate spec Ironic x-project discussion - SmartNICs ============================ * In this session, the Ironic and Neutron teams got together to further explore using SmartNICs to increase the number of compute hosts that can be supported in one deployment (see above "SmartNIC support" topic) - The session was greatly facilitated by the fact that the Neutron team had already agreed the evening before to go ahead with SmartNICs support - There was some discussion as to how Neutron is going to discover the credentials that will be used by the “super OVS agent” to manage OVS in the SmartNICs. The alternatives considered were to include these credentials in the port binding profile or the use of REST calls. The final decision between these alternatives will be made in the related specs: on the Ironic side https://review.openstack.org/#/c/582767/ and on the Neutron side https://review.openstack.org/#/c/595402/ and https://review.openstack.org/#/c/595512 - Julia Kreger will propose a joint Forum session for the Berlin Summit to review progress. The goal is to have the specs finished when that session takes place StarlingX feature upstream =================== * Almost all of the morning of Thursday the 13th was devoted to the review and discussion of StarlingX specs * In preparation for this session, several Neutron team members conducted on Monday the 10th a "review sprint" of the specs submitted by the StarlingX team, providing feedback in Gerrit * We started this session with Matt Peters of Wind River giving the Neutron team an overview of the goals and technical architecture of the StarlingX project. This presentation, along with the feedback that the Neutron team provided beforehand in the specs, really went a long way toward eliminating misunderstandings on both sides, paving the way to good agreements for both sides on all the specs: - Provider Network Management https://review.openstack.org/599980.
Much of the problem with this spec was a nomenclature misunderstanding. We all agreed to move ahead with this spec, adjusting the use of the “provider network” term, creating one new resource in the Neutron API to manage all the configuration options that will be managed by API calls, which will override values set in files. The Oslo team will be consulted on this approach - System Host Management https://review.openstack.org/599981. After clarifying that the real requirement in this spec is the capability to set an agent administratively down, the team agreed to continue the development of this spec using the existing Boolean attribute admin_state_up in DHCP and L3 agents - Fault Management https://review.openstack.org/599982. The team agreed that this is not in the scope of Neutron. The StarlingX team agreed to drop this spec - Host Bindings of Provider Networks https://review.openstack.org/579410. The underlying need in this spec is the ability to handle more than one L2 agent running in a compute host. The StarlingX team agreed to address this need using existing Neutron facilities and features - Segment Range Management of Self-service Networks: https://review.openstack.org/579411. This spec will be dropped by the StarlingX team and folded under Provider Network Management https://review.openstack.org/599980 above - Rescheduling of DHCP Servers and Routers: https://review.openstack.org/595978. Given that Neutron already has an API call to re-schedule routers and in light of the discussion on Wednesday about agent re-balancing, the StarlingX team will reevaluate their proposed approach in this spec - ML2 connection auditing and monitoring https://review.openstack.org/#/c/589313. There is enough data in Neutron log files to enable external management systems (Nagios for example) to address this need, so this spec will be dropped.
The StarlingX team might follow up with a request to improve log messages * Miguel Lavalle invited the StarlingX team to add topics to the weekly team meeting in the "On demand agenda" section, whenever further conversation is deemed necessary. Miguel will also participate once a month in the StarlingX networking meeting Minimum bandwidth scheduling demo =========================== * After lunch, there was a live demo of the bandwidth based scheduling feature, conducted by Bence Romsics and Balazs Gibizer. - The demo was successful and it showed that the Nova and Neutron teams are well on their way to finishing the implementation of this feature during the Stein cycle Nova x-project discussion =================== * The session started with a discussion of the recently merged (Rocky cycle) multiple port binding feature - Sean Mooney of Red Hat has been testing this feature by moving VMs across hosts with heterogeneous Neutron back-ends (OVS, Linuxbridge, etc.) - The current code provides 90% of the functionality needed but some gaps remain - Live migration between different firewall drivers works - Live migration between OVS and Linux bridge almost works. These bugs have been filed: https://bugs.launchpad.net/neutron/+bug/1788012 and https://bugs.launchpad.net/neutron/+bug/1788009. Sean will open a bug for the fact that the Nova libvirt driver does not use the bridge name from the destination binding - Live migration fails between kernel vhost and vhost-user. This is because Nova sets the MTU for kernel vhost tap devices in the libvirt XML but doesn't set the MTU in the XML for vhost-user. Sean will file a bug and fix it in Nova - The agreement was that Sean will fix these bugs during the Stein cycle * Work on bandwidth based scheduling is progressing at a good pace and is expected to be finished by Stein-2.
Currently, there are no blockers, either on the Nova or the Neutron side - There are specs for placement currently under review that originated in the work being done in bandwidth based scheduling: any traits support to allow modeling bandwidth requests for multi-segment networks: https://review.openstack.org/#/c/565730 and https://review.openstack.org/#/c/565741/, a sub-tree filter for GET /resource_providers to allow easier inventory handling for Neutron: https://review.openstack.org/#/c/595236/ and resource provider - request group mapping in allocation candidates to have a scalable way to figure out which RP provides resources for which Neutron port during server create: https://review.openstack.org/#/c/597601/ * Making Neutron the only network back-end for Nova, e.g. deleting Nova Networks - There are still users (CERN) who need Nova Networks - Some progress can be made in Stein on cleaning up Nova unit/functional tests to stop using the nova-network specific stubs and move those over to using NeutronFixture. This will receive very low review attention * Port bindings v3, e.g. extending Neutron to return os-vif objects when binding a port - This is just to finish the original plan for os-vif - The agreement was that this should be done but with a very low priority in this cycle Cyborg x-project discussion ==================== * Sundar Nadathu presented a proposal to enable the joint management of NICs with FPGA capabilities: https://etherpad.openstack.org/p/fpga-networking. * An explanation of how ML2 mechanism drivers work was given to the Cyborg team.
* The agreements were: - Sundar to submit a spec to develop an ML2 mechanism driver to handle the binding of Neutron ports with this type of card - Create a project under Neutron governance, possibly named networking-fpga, to be the repository for the mechanism driver mentioned in the previous point Python-3 goal tests ============== * Nate Johnston is leading the Neutron effort to comply with this community goal: https://governance.openstack.org/tc/goals/stein/python3-first.html * The decisions made during this session were: - We will run unit and functional tests in Python 2.7 during Stein. In the T cycle we will get rid of the functional job. In the U cycle, we will let go of Python 2.7 completely - We will switch all jobs to py36 - Make openstack-tox-py36 a voting and gating job - A message will be sent to the mailing list offering Nate's assistance to the Neutron Stadium projects with the Python 3 transition Neutron upgrades/OVO ================= * OVO adoption continues making steady progress - More contributors are welcome. The backlog is kept here: https://docs.google.com/spreadsheets/d/1Mz7V40GSdcUH_aBoNWjsFaNRp4wf-duz1GFWligiNXc/edit#gid=1051788848 - The best document for new contributors looking to become familiar with OVO is here: https://docs.openstack.org/neutron/latest/contributor/internals/objects_usage.html * The OVO sub-team has agreed to also continue the adoption of the new engine facade ( https://blueprints.launchpad.net/neutron/+spec/enginefacade-switch), since both efforts work in the DB layer and entail traversing the code looking for opportunities for adoption - Neutron OVO objects are implemented with support for the new engine facade in the base class but it is currently disabled globally.
It will be enabled on a per-object basis SNAT logging extension ================= * Yushiro Furukawa proposed to extend the work that has been done so far in logging (security groups and FWaaS) to SNAT * The team agreed that SNAT is a sensible next step for logging * The work plan is the following: - Migrate libnetfilter_log from neutron-fwaas into neutron-lib (in order to call this driver from SNAT logging): https://review.openstack.org/#/c/603310. The target for this is Stein-1 - RPC, agent extension, doc and CLI implementation in Neutron targeted for Stein-2 - Testing is targeted for Stein-3 Performance and scalability ==================== * Nate Johnston will implement the "Speed up Neutron port bulk creation" blueprint: https://blueprints.launchpad.net/neutron/+spec/speed-up-neutron-bulk-creation - This is in support of certain Kuryr use cases * Slowness in the Neutron API. Slawek Kaplonski shared with the team the response time of Neutron REST API requests across several check queue jobs. For each job, results are shown for a timed-out and a successful run: neutron-tempest-dvr - http://paste.openstack.org/show/729482/, neutron-tempest-iptables_hybrid - http://paste.openstack.org/show/729485/, neutron-tempest-linuxbridge - http://paste.openstack.org/show/729487/, neutron-tempest-plugin-dvr-multinode-scenario - http://paste.openstack.org/show/729488/ and tempest-full-py3 - http://paste.openstack.org/show/729489/. The overall conclusion is that a small number of requests seem to be slower than expected and show up in the analysis of all the jobs. As a result, the following decisions were made: - Create a performance sub-team that meets regularly (once a month in principle) to review and identify problematic API requests.
Miguel Lavalle will schedule these meetings - The performance sub-team will define thresholds for acceptable response times, will encode them in Rally jobs and fail the jobs if thresholds are not met - The performance sub-team will improve the tooling to measure performance - Slawek Kaplonski and Miguel Lavalle will work on a first version of measurements QoS Topics ======== * In the QoS session, the following RFEs and bugs were discussed: - [RFE] Does not support shared N-S qos per-tenant, https://bugs.launchpad.net/neutron/+bug/1787793. A technical solution based on TC has already been proposed in the RFE itself. The team supported the implementation of the proposed functionality and the next step is to draft a spec that will clarify how the feature will be handled from the API point of view - [RFE] Does not support ipv6 N-S qos, https://bugs.launchpad.net/neutron/+bug/1787792. The team also supported the implementation of this feature, with the next steps being: 1) investigate how the Common Classifier Framework can be used to associate QoS rules with classifiers so that they apply only to a specific class of traffic; this would make it possible to apply bandwidth limits at the FIP and L2 port levels, as well as DSCP marking rules and other features, 2) build a PoC of such a solution, or find out whether there are other ways to do something like that, 3) in the case of VPN QoS, also implement support for classful bandwidth limits in the TC driver so it may be reused for this RFE as well - Instances miss neutron QoS on their ports after unrescue and soft reboot, https://bugs.launchpad.net/neutron/+bug/1784006. Miguel Lavalle will investigate this bug, although the submitter indicated that it doesn't affect Nova anymore CI stability - some jobs are not stable and require multiple rechecks ================================================= * Slawek Kaplonski shared with the team on Friday morning a list of tests and jobs that are unstable.
The team decided to distribute those tests and jobs to work on fixes. This is the list and the comments that came back from the team: - https://bugs.launchpad.net/neutron/+bug/1717302. The problem was traced back to a wrong version of Keepalived. It is fixed in the master and Rocky branches. May need a fix in the Queens branch - https://bugs.launchpad.net/neutron/+bug/1789434. Taken over by Manjeet Singh Bhatia - https://bugs.launchpad.net/neutron/+bug/1766701. Fixed with https://review.openstack.org/#/c/602696 - https://bugs.launchpad.net/neutron/+bug/1779075. This bug was taken over by Miguel Lavalle and will be addressed as part of the new performance sub-team work mentioned above - https://bugs.launchpad.net/neutron/+bug/1726462. This bug doesn't seem to be caused by Neutron. Miguel Lavalle will ping the Cinder team about it - https://bugs.launchpad.net/neutron/+bug/1779077. We haven't hit this bug lately, according to the logstash query in the bug. Miguel Lavalle will watch it over the next few days - https://bugs.launchpad.net/neutron/+bug/1779328. Needs an owner - https://bugs.launchpad.net/neutron/+bug/1687027 and https://bugs.launchpad.net/neutron/+bug/1784836. These bugs are related to DB migrations and are being investigated by Miguel Lavalle - https://bugs.launchpad.net/neutron/+bug/1791989. Taken over by Slawek Kaplonski - https://bugs.launchpad.net/neutron/+bug/1779801. After analysis was done, it was marked as invalid FWaaS ===== * Announce removal of FWaaS V1 in Stein - German Eichberger will send a message to the mailing list making the announcement: http://lists.openstack.org/pipermail/openstack-dev/2018-September/134864.html * Miguel Lavalle and Hongbin Lu presented four specs that Huawei proposes to expand the FWaaS API. The specs were well received by the team, and the next step is to provide detailed feedback in Gerrit.
These specs, along with the overall feedback provided during the session, are: - Add support for dynamic rules: https://review.openstack.org/#/c/597724/ Will review more and get feedback from cores and Neutron extension with tagging - Extend firewall group inclusion: https://review.openstack.org/#/c/600261/, 'firewall_groups' means remote_group_id for SG. Will comment about remote_fwg_id. - Introduce action 'redirect': https://review.openstack.org/#/c/600563/ Consider how to specify the target (e.g. 3rd-party DPI) and some constraints in the case of 'redirect' - Add support for priority: https://review.openstack.org/#/c/600870/ Sync up the use-case for using multiple firewall groups on a port * Other specs that were considered during the meeting by the team: - Firewall audit notification: https://review.openstack.org/461657. Need to reach out to Zhaobo to sync up - Firewall Rule Scheduling: https://review.openstack.org/236840. Need to reach out to Zhaobo to sync up * German Eichberger suggested evaluating the possibility of deprecating security groups in Neutron and replacing them with FWaaS V2.0 - The consensus was that that decision cannot be made without a wide consensus from the community on whether this is desirable / feasible or not - German Eichberger will send a message to the ML to gather feedback from the community * Other topics discussed during the meeting: - Rework Firewall Group status: Sridar Kandaswami will start work on this - Tempest tests, including scenario: Will be implemented in this cycle - Address group support: implementation work will continue in Stein - Zuul v3 testing: Nate Johnston is working on this - Scoping of L4-L7 support (WIP spec: https://review.openstack.org/#/c/600714/): any protocol for L7 filtering by eBPF - Common Classifier Framework: requires more investigation.
Will sync up with the CCF team - Horizon Technical Debt: Will sync up with Akihiro Motoki, and Yushiro Furukawa will learn AngularJS and Django - Remote firewall group support: German Eichberger will continue working on this during Stein -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Sep 24 21:47:23 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 24 Sep 2018 16:47:23 -0500 Subject: [openstack-dev] [penstack-dev]Discussion about the future of OpenStack in China In-Reply-To: <65bb8c01-dda8-601e-786e-9a998a99ddeb@gmail.com> References: <65bb8c01-dda8-601e-786e-9a998a99ddeb@gmail.com> Message-ID: <1df630d7-c131-ec4a-92d5-8cc6c562fbc3@gmail.com> On 9/24/2018 12:12 PM, Jay Pipes wrote: > There were a couple points that I did manage to decipher, though. One > thing that both articles seemed to say was that OpenStack doesn't meet > public (AWS-ish) cloud use cases and OpenStack doesn't compare favorably > to VMWare either. Yeah I picked up on that also - trying to be all things to all people means we do less well at any single thing. No surprises there. > > Is there a large contingent of Chinese OpenStack users that expect > OpenStack to be a free (as in beer) version of VMware technology? > > What are the 3 most important features that Chinese OpenStack users > would like to see included in OpenStack projects? Yeah I picked up on a few things as well. The article was talking about gaps in upstream services: a) they did a bunch of work on trove for their dbaas solution, but did they contribute any of that work? b) they mentioned a lack of DRS and HA support, but didn't mention the Watcher or Masakari projects - maybe they didn't know they exist?
-- Thanks, Matt From doug at doughellmann.com Mon Sep 24 22:35:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 24 Sep 2018 18:35:04 -0400 Subject: [openstack-dev] [storyboard] Prioritization? In-Reply-To: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> Message-ID: Ben Nemec writes: > Hi, > > This came up in the Oslo meeting as a result of my initial look at the > test Storyboard import. It appears all of the priority data from > launchpad gets lost in the migration, which is going to make organizing > hundreds of bugs somewhat difficult. I'm particularly not fond of it > after spending last cycle whittling down our untriaged bug list. :-) > > Work lists and tags were mentioned as possible priority management tools > in Storyboard, so is there some way to migrate launchpad priorities into > one of those automatically? If not, are there any plans to add that? > > It sounded like a similar conversation is happening with a few other > teams so we wanted to bring the discussion to the mailing list for > visibility. > > Thanks. > > -Ben At the PTG there was feedback from at least one team that consumers of the data in storyboard want a priority setting on each story. Historically the response to that has been that different users will have different priorities, so each of them using worklists is the best way to support those differences of opinion. I think we need to reconsider that position if it's going to block adoption. I think Ben's case is an excellent second example of where having a field to hold some sort of priority value would be useful. Doug From fungi at yuggoth.org Mon Sep 24 22:47:34 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 24 Sep 2018 22:47:34 +0000 Subject: [openstack-dev] [storyboard] Prioritization? 
In-Reply-To: References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> Message-ID: <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> On 2018-09-24 18:35:04 -0400 (-0400), Doug Hellmann wrote: [...] > At the PTG there was feedback from at least one team that > consumers of the data in storyboard want a priority setting on > each story. Historically the response to that has been that > different users will have different priorities, so each of them > using worklists is the best way to support those differences of > opinion. > > I think we need to reconsider that position if it's going to block > adoption. I think Ben's case is an excellent second example of > where having a field to hold some sort of priority value would be > useful. The alternative suggestion, for teams who want to be able to flag some sort of bucketed priorities, is to use story tags. We could even improve the import tool to accept some sort of priority-to-tag-name mapping so that, say, when we import bugs for Oslo their oslo-critical tag is applied to any with a critical bugtask, oslo-medium to any with medium priority tasks, and so on. Not all teams using StoryBoard will want to have a bucketed priority scheme, and those who do won't necessarily want to use the same number or kinds of priorities. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zhipengh512 at gmail.com Mon Sep 24 23:25:52 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 25 Sep 2018 07:25:52 +0800 Subject: [openstack-dev] [penstack-dev]Discussion about the future of OpenStack in China In-Reply-To: <1df630d7-c131-ec4a-92d5-8cc6c562fbc3@gmail.com> References: <65bb8c01-dda8-601e-786e-9a998a99ddeb@gmail.com> <1df630d7-c131-ec4a-92d5-8cc6c562fbc3@gmail.com> Message-ID: watcher is known by some, but I don't think Masakari is On Tue, Sep 25, 2018 at 5:47 AM Matt Riedemann wrote: > On 9/24/2018 12:12 PM, Jay Pipes wrote: > > There were a couple points that I did manage to decipher, though. One > > thing that both articles seemed to say was that OpenStack doesn't meet > > public (AWS-ish) cloud use cases and OpenStack doesn't compare favorably > > to VMWare either. > > Yeah I picked up on that also - trying to be all things to all people > means we do less well at any single thing. No surprises there. > > > > > Is there a large contingent of Chinese OpenStack users that expect > > OpenStack to be a free (as in beer) version of VMware technology? > > > > What are the 3 most important features that Chinese OpenStack users > > would like to see included in OpenStack projects? > > Yeah I picked up on a few things as well. The article was talking about > gaps in upstream services: > > a) they did a bunch of work on trove for their dbaas solution, but did > they contribute any of that work? > > b) they mentioned a lack of DRS and HA support, but didn't mention the > Watcher or Masakari projects - maybe they didn't know they exist? 
> > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Sep 24 23:31:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 24 Sep 2018 19:31:17 -0400 Subject: [openstack-dev] [storyboard] Prioritization? In-Reply-To: <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> Message-ID: Jeremy Stanley writes: > On 2018-09-24 18:35:04 -0400 (-0400), Doug Hellmann wrote: > [...] >> At the PTG there was feedback from at least one team that >> consumers of the data in storyboard want a priority setting on >> each story. Historically the response to that has been that >> different users will have different priorities, so each of them >> using worklists is the best way to support those differences of >> opinion. >> >> I think we need to reconsider that position if it's going to block >> adoption. I think Ben's case is an excellent second example of >> where having a field to hold some sort of priority value would be >> useful. > > The alternative suggestion, for teams who want to be able to flag > some sort of bucketed priorities, is to use story tags. 
We could > even improve the import tool to accept some sort of > priority-to-tag-name mapping so that, say, when we import bugs for > Oslo their oslo-critical tag is applied to any with a critical > bugtask, oslo-medium to any with medium priority tasks, and so on. > Not all teams using StoryBoard will want to have a bucketed priority > scheme, and those who do won't necessarily want to use the same > number or kinds of priorities. That came up as a suggestion, too. The challenge there is that we cannot (as far as I know) sort on tags. So even if we have tags, we can't create a list of stories that is ordered automatically based on the priority. Maybe there's a solution to that? Doug From fungi at yuggoth.org Mon Sep 24 23:36:37 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 24 Sep 2018 23:36:37 +0000 Subject: [openstack-dev] [storyboard] Prioritization? In-Reply-To: References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> Message-ID: <20180924233637.yu4gfokx6t3g5qva@yuggoth.org> On 2018-09-24 19:31:17 -0400 (-0400), Doug Hellmann wrote: > That came up as a suggestion, too. The challenge there is that we > cannot (as far as I know) sort on tags. So even if we have tags, > we can't create a list of stories that is ordered automatically > based on the priority. Maybe there's a solution to that? A board with automatic lanes for each priority tag? That way you can also have a lane for "stories with incomplete tasks for projects in my project-group but which haven't been prioritized yet" and be able to act on/triage them accordingly. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From duc.openstack at gmail.com Mon Sep 24 23:37:20 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 24 Sep 2018 16:37:20 -0700 Subject: [openstack-dev] [penstack-dev]Discussion about the future of OpenStack in China In-Reply-To: References: <65bb8c01-dda8-601e-786e-9a998a99ddeb@gmail.com> <1df630d7-c131-ec4a-92d5-8cc6c562fbc3@gmail.com> Message-ID: Senlin also supports the HA use case via its health policy. On Mon, Sep 24, 2018 at 4:26 PM Zhipeng Huang wrote: > > watcher is known by some, but I don't think Masakari is > > On Tue, Sep 25, 2018 at 5:47 AM Matt Riedemann wrote: >> >> On 9/24/2018 12:12 PM, Jay Pipes wrote: >> > There were a couple points that I did manage to decipher, though. One >> > thing that both articles seemed to say was that OpenStack doesn't meet >> > public (AWS-ish) cloud use cases and OpenStack doesn't compare favorably >> > to VMWare either. >> >> Yeah I picked up on that also - trying to be all things to all people >> means we do less well at any single thing. No surprises there. >> >> > >> > Is there a large contingent of Chinese OpenStack users that expect >> > OpenStack to be a free (as in beer) version of VMware technology? >> > >> > What are the 3 most important features that Chinese OpenStack users >> > would like to see included in OpenStack projects? >> >> Yeah I picked up on a few things as well. The article was talking about >> gaps in upstream services: >> >> a) they did a bunch of work on trove for their dbaas solution, but did >> they contribute any of that work? >> >> b) they mentioned a lack of DRS and HA support, but didn't mention the >> Watcher or Masakari projects - maybe they didn't know they exist? 
>> >> -- >> >> Thanks, >> >> Matt >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Zhipeng (Howard) Huang > > Standard Engineer > IT Standard & Patent/IT Product Line > Huawei Technologies Co,. Ltd > Email: huangzhipeng at huawei.com > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > Research Assistant > Mobile Ad-Hoc Network Lab, Calit2 > University of California, Irvine > Email: zhipengh at uci.edu > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From no-reply at openstack.org Tue Sep 25 02:43:50 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Tue, 25 Sep 2018 02:43:50 -0000 Subject: [openstack-dev] tripleo-heat-templates 9.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for tripleo-heat-templates for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/tripleo-heat-templates/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/tripleo-heat-templates/log/?h=stable/rocky Release notes for tripleo-heat-templates can be found at: https://docs.openstack.org/releasenotes/tripleo-heat-templates/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/tripleo and tag it *rocky-rc-potential* to bring it to the tripleo-heat-templates release crew's attention. From no-reply at openstack.org Tue Sep 25 02:44:40 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Tue, 25 Sep 2018 02:44:40 -0000 Subject: [openstack-dev] tripleo-puppet-elements 9.0.0.0rc2 (rocky) Message-ID: Hello everyone, A new release candidate for tripleo-puppet-elements for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/tripleo-puppet-elements/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/tripleo-puppet-elements/log/?h=stable/rocky Release notes for tripleo-puppet-elements can be found at: https://docs.openstack.org/releasenotes/tripleo-puppet-elements/ From jichenjc at cn.ibm.com Tue Sep 25 05:51:30 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Tue, 25 Sep 2018 13:51:30 +0800 Subject: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for'T' release In-Reply-To: <20180924132250.GW28120@paraplu> References: <20180924132250.GW28120@paraplu> Message-ID: >>>(a) KVM for IBM z Systems: John Garbutt pointed out[3] on IRC that: >>> "IBM announced that KVM for IBM z will be withdrawn, effective March >>> 31, 2018 [...] 
development will not only continue unaffected, but >>> the options for users grow, especially with the recent addition of >>> SuSE to the existing support in Ubuntu." >>> The message seems to be: "use a regular distribution". So this is >>> covered, if we a version based on other distributions.

Yes, as [3] indicates, IBM no longer has a KVM product on s390x. We are cooperating with the distros on enablement, and for OpenStack, KVM on z has its own third-party CI, maintained by us, per [5].

[5] http://ci-watch.tintri.com/project?project=nova (IBM zKVM CI)

Best Regards!

Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Kashyap Chamarthy To: openstack-operators at lists.openstack.org, openstack-dev at lists.openstack.org Date: 09/24/2018 09:28 PM Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for 'T' release

Hey folks,

Before we bump the agreed upon[1] minimum versions for libvirt and QEMU for 'Stein', we need to do the tedious work of picking the NEXT_MIN_* versions for the 'T' (which is still in the naming phase) release, which will come out in the autumn (Sep-Nov) of 2019.

Proposal
--------

Looking at the DistroSupportMatrix[2], it seems like we can pick the libvirt and QEMU versions supported by the next LTS release of Ubuntu -- 18.04; "Bionic", which are:

libvirt: 4.0.0
QEMU: 2.11

Debian, Fedora, Ubuntu (Bionic), openSUSE currently already ship the above versions. And it seems reasonable to assume that the enterprise distributions will also ship the said versions pretty soon; but let's double-confirm below.

Considerations and open questions
---------------------------------

(a) KVM for IBM z Systems: John Garbutt pointed out[3] on IRC that:

"IBM announced that KVM for IBM z will be withdrawn, effective March 31, 2018 [...]
development will not only continue unaffected, but the options for users grow, especially with the recent addition of SuSE to the existing support in Ubuntu." The message seems to be: "use a regular distribution". So this is covered, if we a version based on other distributions. (b) Oracle Linux: Can you please confirm if you'll be able to release libvirt and QEMU to 4.0.0 and 2.11, respectively? (c) SLES: Same question as above. Assuming Oracle Linux and SLES confirm, please let us know if there are any objections if we pick NEXT_MIN_* versions for the OpenStack 'T' release to be libvirt: 4.0.0 and QEMU: 2.11. * * * A refresher on libvirt and QEMU release schedules ------------------------------------------------- - There will be at least 12 libvirt releases (_excluding_ maintenance releases) by Autumn 2019. A new libvirt release comes out every month[4]. - And there will be about 4 releases of QEMU. A new QEMU release comes out once every four months. [1] http://git.openstack.org/cgit/openstack/nova/commit/?h=master&id=28d337b -- Pick next minimum libvirt / QEMU versions for "Stein" [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix [3] http://kvmonz.blogspot.com/2017/03/kvm-for-ibm-z-withdrawal.html [4] https://libvirt.org/downloads.html#schedule -- /kashyap _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From thierry at openstack.org Tue Sep 25 08:29:16 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 25 Sep 2018 10:29:16 +0200 Subject: [openstack-dev] [storyboard] Prioritization? 
In-Reply-To: References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> Message-ID: <3138793d-f86e-2ea2-0b0d-959bcd6b88af@openstack.org> Doug Hellmann wrote: > I think we need to reconsider that position if it's going to block > adoption. I think Ben's case is an excellent second example of where > having a field to hold some sort of priority value would be useful. Absence of priorities was an initial design choice[1] based on the fact that in an open collaboration every group, team, organization has their own views on what the priority of a story is, so worklist and tags are better ways to capture that. Also they don't really work unless you triage everything. And then nobody really looks at them to prioritize their work, so they are high cost for little benefit. That said, it definitely creates friction, because alternatives are less convenient / visible, and it's not how other tools work... so the "right" answer here may not be the "best" answer. [1] https://wiki.openstack.org/wiki/StoryBoard/Priority -- Thierry Carrez (ttx) From adam at sotk.co.uk Tue Sep 25 08:47:43 2018 From: adam at sotk.co.uk (Adam Coldrick) Date: Tue, 25 Sep 2018 09:47:43 +0100 Subject: [openstack-dev] [storyboard] Prioritization? In-Reply-To: <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> Message-ID: <1537865263.2460.2.camel@sotk.co.uk> On Mon, 2018-09-24 at 22:47 +0000, Jeremy Stanley wrote: > On 2018-09-24 18:35:04 -0400 (-0400), Doug Hellmann wrote: > [...] > > At the PTG there was feedback from at least one team that > > consumers of the data in storyboard want a priority setting on > > each story. Historically the response to that has been that > > different users will have different priorities, so each of them > > using worklists is the best way to support those differences of > > opinion. 
> > > > I think we need to reconsider that position if it's going to block > > adoption. I think Ben's case is an excellent second example of > > where having a field to hold some sort of priority value would be > > useful. I'm strongly against reverting to having a single global priority value for things in StoryBoard, especially so for stories as opposed to tasks. Having a single global priority field for stories at best implies that a cross-project story has the same priority in each project, and at worst means cross-project discussions need to occur to find an agreement on an acceptable priority (or one affected project's opinion overrules the others, with the others' priorities becoming harder to understand at a glance or just completely lost). For tasks I am less concerned in that aspect since cross-project support isn't hurt, but remain of the opinion that a global field is the wrong approach since it means that only one person (or group of people) gets to visibly express their opinion on the priority of the task. Allowing multiple groups to express opinions on the priority of the same tasks allows situations where (to use a real world example I saw recently, but not in OpenStack) an upstream project marks a bug as medium priority for whatever reason, but a downstream user of that project is completely broken by that bug, meaning either providing a fix to it or persuading someone else to is of critical importance to them. With global priority there is a trade-off, either the bug tracker displays priorities with no reference as to who they are important to, downstream duplicate the issue elsewhere to track their priority, or their expression of how important the issue is is lost in a comment in order to maintain the state of "all priorities are determined by the core team". > The alternative suggestion, for teams who want to be able to flag > some sort of bucketed priorities, is to use story tags. 
We could > even improve the import tool to accept some sort of > priority-to-tag-name mapping so that, say, when we import bugs for > Oslo their oslo-critical tag is applied to any with a critical > bugtask, oslo-medium to any with medium priority tasks, and so on. > Not all teams using StoryBoard will want to have a bucketed priority > scheme, and those who do won't necessarily want to use the same > number or kinds of priorities.

This approach will work fine, but as far as I can tell the only benefit of this over simply creating, say, a board with a lane for each priority bucket is better discoverability when browsing stories. In my opinion that is a bug in our browse/search implementation, and the story list should be able to be filtered by worklist or board. Using this as a workaround is sensible, but I think I'd rather encourage the recommended workflow of tracking priority in ordered worklists or boards.

In my eyes the correct solution to Ben's issue of losing all the priority information is to cause the migration scripts to create (or allow an existing board to be specified) a board with lanes representing the different Launchpad priorities used, and populate it accordingly. It's probably also worth noting that the database still tracks the deprecated task-level global priority, and the migration script imports priority data into that field, so it would be possible to write a script to interrogate the API and build such a board post-migration. See [0], [1], and [2] for example. I would strongly advise against actively using the existing global priority field for tracking priority though, since it is deprecated and the intent is for it to go away at some point.
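A sketch of that post-migration script idea — reading the deprecated task-level priority back out of the API and sorting tasks into board lanes — might look like the following. This is only an illustration, not StoryBoard code: the helper names are made up, and the actual HTTP fetch (e.g. python-requests against the /api/v1/tasks URLs in [0]-[2]) is reduced to a comment so that the lane-building logic stands alone:

```python
from urllib.parse import urlencode

# storyboard-dev API root, as used by the endpoints referenced in [0]-[2].
API = "https://storyboard-dev.openstack.org/api/v1"

def task_query_url(project_id, priority):
    # Builds e.g. .../tasks?project_id=574&priority=high; fetching it
    # (for instance with requests.get(url).json()) would return a list
    # of task dicts including "title" and the deprecated "priority".
    return f"{API}/tasks?{urlencode({'project_id': project_id, 'priority': priority})}"

def lanes_from_tasks(tasks):
    # Group already-fetched task dicts into board lanes keyed by the
    # deprecated global priority; tasks without one fall into "low"
    # here, which is an arbitrary choice for this sketch.
    lanes = {"high": [], "medium": [], "low": []}
    for task in tasks:
        lanes.setdefault(task.get("priority") or "low", []).append(task["title"])
    return lanes
```

Each resulting lane could then be turned into a worklist on a board, preserving the imported Launchpad priorities without keeping the deprecated field in active use.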
In general I think most of the complaints about complex priority vs global priority that I've seen can be reduced to issues with how the complex priority approach as implemented in StoryBoard makes it somewhat harder to actually discover what people think about the priority of things, and the inability to order search results by priority. I believe that can be solved by improving our implementation, rather than falling back to a flawed global priority approach.

- Adam

[0]: https://storyboard-dev.openstack.org/api/v1/tasks?project_id=574&priority=high
[1]: https://storyboard-dev.openstack.org/api/v1/tasks?project_id=574&priority=medium
[2]: https://storyboard-dev.openstack.org/api/v1/tasks?project_id=574&priority=low

From shardy at redhat.com Tue Sep 25 08:51:23 2018 From: shardy at redhat.com (Steven Hardy) Date: Tue, 25 Sep 2018 09:51:23 +0100 Subject: [openstack-dev] [TripleO] Removing global bootstrap_nodeid? Message-ID:

Hi all,

After some discussions with bandini at the PTG, I've been taking a look at this bug and how to solve it:

https://bugs.launchpad.net/tripleo/+bug/1792613

(Also more information in downstream bz1626140)

The problem is that we always run container bootstrap tasks (as well as a bunch of update/upgrade tasks) on the bootstrap_nodeid, which by default is always the overcloud-controller-0 node (regardless of which services are running there).

This breaks a pattern we established a while ago for Composable HA, where we work out the bootstrap node by $service_short_bootstrap_hostname, which means we always run on the first node that has the service enabled (even if it spans more than one Role).

This presents two problems:

1. service_config_settings only puts the per-service hieradata on nodes where a service is enabled, hence data needed for the bootstrapping (e.g. keystone users etc.) can be missing if, say, keystone is running on some role that's not Controller (this, I think, is the root cause of the bug/bz linked above)

2.
Even when, by luck, we have the data needed to complete the bootstrap tasks, we'll end up pulling service containers to nodes where the service isn't running, potentially wasting both time and space.

I've been looking at solutions, and it seems like we either have to generate per-service bootstrap_nodeids (I have a patch to do this https://review.openstack.org/605010), or perhaps we could just remove all the bootstrap node IDs and switch to using hostnames instead? Seems like that could be simpler, but I wanted to check if there's anything I'm missing?

[root@overcloud-controller-0 ~]# ansible -m setup localhost | grep hostname
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
"ansible_hostname": "overcloud-controller-0",
"facter_hostname": "overcloud-controller-0",
[root@overcloud-controller-0 ~]# hiera -c /etc/puppet/hiera.yaml xinetd_short_bootstrap_node_name
overcloud-controller-0
[root@overcloud-controller-0 ~]# hiera -c /etc/puppet/hiera.yaml xinetd_bootstrap_nodeid
ede5f189-7149-4faf-a378-ac965a2a818c

This is the first part of the problem; once we agree on the approach here we can convert docker-puppet.py and all the *tasks to use the per-service IDs/names instead of the global one to work properly with composable roles/services.

Any thoughts on this appreciated before I go ahead and implement the fix.

Steve

From ildiko.vancsa at gmail.com Tue Sep 25 09:23:13 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 25 Sep 2018 11:23:13 +0200 Subject: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions Message-ID: <5B518A5C-10FD-45D1-B462-E5A02DBBE2FE@gmail.com>

Hi,

Hereby I would like to give you a short summary of the discussions that happened at the PTG in the area of edge.
The Edge Computing Group sessions took place on Tuesday where our main activity was to draw an overall architecture diagram to capture the basic setup and requirements of edge towards a set of OpenStack services. Our main and initial focus was around Keystone and Glance, but discussion with other project teams such as Nova, Ironic and Cinder also happened later during the week. The edge architecture diagrams we drew are part of a so called Minimum Viable Product (MVP) which refers to the minimalist nature of the setup where we didn’t try to cover all aspects but rather define a minimum set of services and requirements to get to a functional system. This architecture will evolve further as we collect more use cases and requirements. To describe edge use cases on a higher level with Mobile Edge as a use case in the background we identified three main building blocks: * Main or Regional Datacenter (DC) * Edge Sites * Far Edge Sites or Cloudlets We examined the architecture diagram with the following user stories in mind: * As a deployer of OpenStack I want to minimize the number of control planes I need to manage across a large geographical region. * As a user of OpenStack I expect instance autoscale continues to function in an edge site if connectivity is lost to the main datacenter. * As a deployer of OpenStack I want disk images to be pulled to a cluster on demand, without needing to sync every disk image everywhere. * As a user of OpenStack I want to manage all of my instances in a region (from regional DC to far edge cloudlets) via a single API endpoint. We concluded to talk about service requirements in two major categories: 1. The Edge sites are fully operational in case of a connection loss between the Regional DC and the Edge site which requires control plane services running on the Edge site 2. 
Having full control on the Edge site is not critical in the case of a connection loss between the Regional DC and an Edge site, which can be satisfied by having the control plane services running only in the Regional DC.

In the first case the orchestration of the services becomes harder and is not necessarily solved yet, while in the second case you have centralized control but lose functionality on the Edge sites in the event of a connection loss.

We did not discuss things such as HA at the PTG and we did not go into details on networking during the architectural discussion either.

We agreed to prefer federation for Keystone and came up with two work items to cover missing functionality:

* Keystone to trust a token from an ID Provider master and when the auth method is called, perform an idempotent creation of the user, project and role assignments according to the assertions made in the token
* Keystone should support the creation of users and projects with predictable UUIDs (e.g. a hash of the name of the users and projects). This greatly simplifies image federation and telemetry gathering

For Glance we explored image caching and spent some time discussing the option to also cache metadata so a user can boot new instances at the edge in the case of a network connection loss, which would result in being disconnected from the registry:

* As a user of Glance, I want to upload an image in the main datacenter and boot that image in an edge datacenter. Fetch the image to the edge datacenter with its metadata.

We are still in the process of documenting the discussions and drawing the architecture diagrams and flows for Keystone and Glance.
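As an aside on the predictable-UUID work item above: name-based (RFC 4122 version 5) UUIDs are one way such name hashing could work, sketched below. This is not Keystone code, and the namespace value is an assumption — the only requirement is that every site derives IDs the same way:

```python
import uuid

# Hypothetical shared namespace; any UUID agreed on by all sites works,
# as long as the regional DC and every edge site use the same one.
EDGE_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "edge.example.org")

def predictable_id(domain_name, name):
    # Deterministic: the same domain/name pair yields the same UUID on
    # every site, with no synchronization between Keystones required.
    return uuid.uuid5(EDGE_NS, f"{domain_name}/{name}")
```

Because the result depends only on the names, an edge site that loses connectivity can still compute the same project and user IDs as the regional DC, which is what simplifies image federation and telemetry gathering.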
In addition to the above we went through Dublin PTG wiki (https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG) capturing requirements: * we agreed to consider the list of requirements on the wiki finalized for now * agreed to move there the additional requirements listed on the Use Cases (https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases) wiki page For the details on the discussions with related OpenStack projects you can check the following etherpads for notes: * Cinder: https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018 * Glance: https://etherpad.openstack.org/p/glance-stein-edge-architecture * Ironic: https://etherpad.openstack.org/p/ironic-stein-ptg-edge * Keystone: https://etherpad.openstack.org/p/keystone-stein-edge-architecture * Neutron: https://etherpad.openstack.org/p/neutron-stein-ptg * Nova: https://etherpad.openstack.org/p/nova-ptg-stein Notes from the StarlingX sessions: https://etherpad.openstack.org/p/stx-PTG-agenda We are still working on the MVP architecture to clean it up and discuss comments and questions before moving it to a wiki page. Please let me know if you would like to get access to the document and I will share it with you. Please let me know if you have any questions or comments to the above captured items. Thanks and Best Regards, Ildikó (IRC: ildikov) From yongle.li at gmail.com Tue Sep 25 09:27:41 2018 From: yongle.li at gmail.com (Fred Li) Date: Tue, 25 Sep 2018 17:27:41 +0800 Subject: [openstack-dev] [penstack-dev]Discussion about the future of OpenStack in China In-Reply-To: <65bb8c01-dda8-601e-786e-9a998a99ddeb@gmail.com> References: <65bb8c01-dda8-601e-786e-9a998a99ddeb@gmail.com> Message-ID: Hi Jay, Yes, I used google translator and modified few. I will talk with OpenStack operation manager in China to find a way to translate them. Your questions are for the first blog [1], I think. 
I share the blogs here because I think the authors may follow the mail list as well and hope they will reply. And if you have time, you can read [2] and others. Thanks On Tue, Sep 25, 2018 at 1:12 AM Jay Pipes wrote: > Fred, > > I had a hard time understanding the articles. I'm not sure if you used > Google Translate to do the translation from Chinese to English, but I > personally found both of them difficult to follow. > > There were a couple points that I did manage to decipher, though. One > thing that both articles seemed to say was that OpenStack doesn't meet > public (AWS-ish) cloud use cases and OpenStack doesn't compare favorably > to VMWare either. > > Is there a large contingent of Chinese OpenStack users that expect > OpenStack to be a free (as in beer) version of VMware technology? > > What are the 3 most important features that Chinese OpenStack users > would like to see included in OpenStack projects? > > Thanks, > -jay > > On 09/24/2018 11:10 AM, Fred Li wrote: > > Hi folks, > > > > Recently there are several blogs which discussed about the future of > > OpenStack. If I was not wrong, the first one is > > "OpenStack-8-year-itch"[1], and you can find its English version > > attached. Thanks to google translation. The second one is > > "5-years-my-opinion-on-OpenStack" [2] with English version attached as > > well. Please translate the 3 to 6 and read them if you are interested. > > > > I don't want to judge anything here. I just want to share as they are > > quite hot discussion and I think it is valuable for the whole community, > > not part of community to know. 
> > > > [1] https://mp.weixin.qq.com/s/GM5cMOl0q3hb_6_eEiixzA > > [2] https://mp.weixin.qq.com/s/qZkE4o_BHBPlbIjekjDRKw > > [3] https://mp.weixin.qq.com/s/svX4z3JM5ArQ57A1jFoyLw > > [4] https://mp.weixin.qq.com/s/Nyb0OxI2Z7LxDpofTTyWOg > > [5] https://mp.weixin.qq.com/s/5GV4i8kyedHSbCxCO1VBRw > > [6] https://mp.weixin.qq.com/s/yeBcMogumXKGQ0KyKrgbqA > > -- > > Regards > > Fred Li (李永乐) > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- Regards Fred Li (李永乐) -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Tue Sep 25 09:33:30 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Tue, 25 Sep 2018 11:33:30 +0200 Subject: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs In-Reply-To: <48f4ddf1-3d93-340e-3ad2-11bc4ef004ef@redhat.com> References: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> <48f4ddf1-3d93-340e-3ad2-11bc4ef004ef@redhat.com> Message-ID: <1537868010.2456380.1519770432.11C4E0C5@webmail.messagingengine.com> On Mon, Sep 24, 2018, at 8:40 PM, John Dennis wrote: > On 9/24/18 8:00 AM, Colleen Murphy wrote: > > This is in regard to https://launchpad.net/bugs/1641625 and the proposed patch https://review.openstack.org/588211 for it. Thanks Vishakha for getting the ball rolling. > > > > tl;dr: Keystone as an IdP should support sending non-strings/lists-of-strings as user attribute values, specifically lists of keystone groups, here's how that might happen. 
> > > > Problem statement: > > > > When keystone is set up as a service provider with an external non-keystone identity provider, it is common to configure the mapping rules to accept a list of group names from the IdP and map them to some property of a local keystone user, usually also a keystone group name. When keystone acts as the IdP, it's not currently possible to send a group name as a user property in the assertion. There are a few problems: > > > > 1. We haven't added any openstack_groups key in the creation of the SAML assertion (http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164). > > 2. If we did, this would not be enough. Unlike other IdPs, in keystone there can be multiple groups with the same name, namespaced by domain. So it's not enough for the SAML AttributeStatement to contain a semi-colon-separated list of group names, since a user could theoretically be a member of two or more groups with the same name. > > * Why can't we just send group IDs, which are unique? Because two different keystones are not going to have independent groups with the same UUID, so we cannot possibly map an ID of a group from keystone A to the ID of a different group in keystone B. We could map the ID of the group in in A to the name of a group in B but then operators need to create groups with UUIDs as names which is a little awkward for both the operator and the user who now is a member of groups with nondescriptive names. > > 3. If we then were able to encode a complex type like a group dict in a SAML assertion, we'd have to deal with it on the service provider side by being able to parse such an environment variable from the Apache headers. > > 4. The current mapping rules engine uses basic python string formatting to translate remote key-value pairs to local rules. We would need to change the mapping API to work with values more complex than strings and lists of strings. 
> > > > Possible solution: > > > > Vishakha's patch (https://review.openstack.org/588211) starts to solve (1) but it doesn't go far enough to solve (2-4). What we talked about at the PTG was: > > > > 2. Encode the group+domain as a string, for example by using the dict string repr or a string representation of some custom XML and maybe base64 encoding it. > > * It's not totally clear whether the AttributeValue class of the pysaml2 library supports any data types outside of the xmlns:xs namespace or whether nested XML is an option, so encoding the whole thing as an xs:string seems like the simplest solution. > > 3. The SP will have to be aware that openstack_groups is a special key that needs the encoding reversed. > > * I wrote down "MultiDict" in my notes but I don't recall exactly what format the environment variable would take that would make a MultiDict make sense here, in any case I think encoding the whole thing as a string eliminates the need for this. > > 4. We didn't talk about the mapping API, but here's what I think. If we were just talking about group names, the mapping API today would work like this (slight oversimplification for brevity): > > > > Given a list of openstack_groups like ["A", "B", "C"], it would work like this: > > > > [ > > { > > "local": > > [ > > { > > "group": > > { > > "name": "{0}", > > "domain": > > { > > "name": "federated_domain" > > } > > } > > } > > ], "remote": > > [ > > { > > "type": "openstack_groups" > > } > > ] > > } > > ] > > (paste in case the spacing makes this unreadable: http://paste.openstack.org/show/730623/ ) > > > > But now, we no longer have a list of strings but something more like [{"name": "A", "domain_name": "Default"} {"name": "B", "domain_name": "Default", "name": "A", "domain_name": "domainB"}]. Since {0} isn't a string, this example doesn't really work. Instead, let's assume that in step (3) we converted the decoded AttributeValue text to an object. 
Then the mapping could look more like this: > > > > [ > > { > > "local": > > [ > > { > > "group": > > { > > "name": "{0.name}", > > "domain": > > { > > "name": "{0.domain_name}" > > } > > } > > } > > ], "remote": > > [ > > { > > "type": "openstack_groups" > > } > > ] > > } > > ] > > (paste: http://paste.openstack.org/show/730622/ ) > > > > Alternatively, we could forget about the namespacing problem and simply say we only pass group names in the assertion, and if you have ambiguous group names you're on your own. We could also try to support both, e.g. have openstack_groups mean a list of group names for simpler use cases, and openstack_groups_unique mean the list of encoded group+domain strings for advanced use cases. > > > > Finally, whatever we decide for groups we should also apply to openstack_roles, which currently only supports global roles and not domain-specific roles. > > > > (It's also worth noting, for clarity, that the samlize function does handle namespaced projects, but this is because it's retrieving the project from the token and therefore there is only ever one project and one project domain so there is no ambiguity.) > > > > A few thoughts to help focus the discussion: > > * Namespacing is critical; no design should be permitted which allows > for ambiguous names. Ambiguous names are a security issue and can be > used by an attacker. The SAML designers recognized the importance of > disambiguating names. In SAML names are conveyed inside a NameIdentifier > element which (optionally) includes "name qualifier" attributes, which in > SAML lingo is a namespace name. > > * SAML does not define the format of an attribute value. You can use > anything you want as long as it can be expressed in valid XML and > the cooperating parties know how to interpret the XML content. But > herein lies the problem. Very few SAML implementations know how to > consume an attribute value other than a string.
In the real world, > despite what the SAML spec says is permitted, the constraint is that attribute > values are strings. > > * I haven't looked at the pysaml implementation but I'd be surprised if > it treated attribute values as anything other than a string. In theory > it could take any Python object (or JSON) and serialize it into XML but > you would still be stuck with the receiver being unable to parse the > attribute value (see above point). > > * You can encode complex data in an attribute value while only using a > simple string. The only requirement is the relying party knowing how to > interpret the string value. Note, this is distinctly different than > using non-string attribute values because of who is responsible for > parsing the value. If you use a non-string attribute value the SAML > library needs to know how to parse it, and none or very few will know how to > process that element. But if it's a string value the SAML library will > happily pass that string back up to the application, which can then > interpret it. The easiest way to embed complex data in a string is with > JSON; we do it all the time, all over the place in OpenStack. [1][2] > > So my suggestion would be to give the attribute a meaningful name. > Define a JSON schema for the data and then let the upper layers decode > the JSON and operate on it. This is no different than any other SAML > attribute passed as a string; the receiver MUST know how to interpret the > string value. > > [1] We already pass complex data in a SAML attribute string value. We > permit a comma-separated list of group names to appear in the 'groups' > mapping rule (although I don't think this feature is documented in our > mapping rules documentation). The receiver (our mapping engine) has > hard-coded logic to look for a list of names. > > [2] We might want to prepend a format specifier to a string containing > complex data, e.g. "JSON:{json object}".
Our parser could then look for > a leading format tag and if it finds one strip it off and pass the rest > of the string into the proper parser. > > -- > John Dennis > Thanks for this response, John. I think serializing to JSON and prepending a format specifier makes sense. Colleen From zzxwill at gmail.com Tue Sep 25 12:57:11 2018 From: zzxwill at gmail.com (Will Zhou) Date: Tue, 25 Sep 2018 08:57:11 -0400 Subject: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core In-Reply-To: <0aa3ebd2-82d4-4a60-7162-c974c2d6449c@gmail.com> References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> <4f413d36-463e-477a-9886-79bf55df677c@suse.com> <07fcbf71a9406e8d7b918b238377d503@arcor.de> <0aa3ebd2-82d4-4a60-7162-c974c2d6449c@gmail.com> Message-ID: +1 On Sat, Sep 22, 2018 at 10:33 PM Ian Y. Choi wrote: > Thanks a lot all for such nomination & agreement! > > I would like to do my best after I become doc-core as like what I > current do, > although I still need the help from so many kind, energetic, and > enthusiastic OpenStack contributors and core members > on OpenStack documentation and so many projects. > > > With many thanks, > > /Ian > > Melvin Hillsman wrote on 9/21/2018 5:31 AM: > > ++ > > > > On Thu, Sep 20, 2018 at 3:11 PM Frank Kloeker > > wrote: > > > > Am 2018-09-19 20:54, schrieb Andreas Jaeger: > > > On 2018-09-19 20:50, Petr Kovar wrote: > > >> Hi all, > > >> > > >> Based on our PTG discussion, I'd like to nominate Ian Y. Choi for > > >> membership in the openstack-doc-core team. I think Ian doesn't > > need an > > >> introduction, he's been around for a while, recently being deeply > > >> involved > > >> in infra work to get us robust support for project team docs > > >> translation and > > >> PDF builds. > > >> > > >> Having Ian on the core team will also strengthen our > > integration with > > >> the i18n community. > > >> > > >> Please let the ML know should you have any objections.
> > > > > > The opposite ;), heartly agree with adding him, > > > > > > Andreas > > > > ++ > > > > Frank > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > -- > > Kind regards, > > > > Melvin Hillsman > > mrhillsman at gmail.com > > mobile: (832) 264-2646 <0832%20264%202646> > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- --------------------------------------------- ​ 周正喜 Mobile: 13701280947 ​WeChat: 472174291 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Tue Sep 25 12:58:56 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 25 Sep 2018 21:58:56 +0900 Subject: [openstack-dev] [docs] Nominating Ian Y. 
Choi for openstack-doc-core In-Reply-To: References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> <4f413d36-463e-477a-9886-79bf55df677c@suse.com> <07fcbf71a9406e8d7b918b238377d503@arcor.de> <0aa3ebd2-82d4-4a60-7162-c974c2d6449c@gmail.com> Message-ID: +1 *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * On Tue, Sep 25, 2018 at 9:57 PM Will Zhou wrote: > +1 > > On Sat, Sep 22, 2018 at 10:33 PM Ian Y. Choi wrote: > >> Thanks a lot all for such nomination & agreement! >> >> I would like to do my best after I become doc-core as like what I >> current do, >> although I still need the help from so many kind, energetic, and >> enthusiastic OpenStack contributors and core members >> on OpenStack documentation and so many projects. >> >> >> With many thanks, >> >> /Ian >> >> Melvin Hillsman wrote on 9/21/2018 5:31 AM: >> > ++ >> > >> > On Thu, Sep 20, 2018 at 3:11 PM Frank Kloeker > > > wrote: >> > >> > Am 2018-09-19 20:54, schrieb Andreas Jaeger: >> > > On 2018-09-19 20:50, Petr Kovar wrote: >> > >> Hi all, >> > >> >> > >> Based on our PTG discussion, I'd like to nominate Ian Y. Choi for >> > >> membership in the openstack-doc-core team. I think Ian doesn't >> > need an >> > >> introduction, he's been around for a while, recently being deeply >> > >> involved >> > >> in infra work to get us robust support for project team docs >> > >> translation and >> > >> PDF builds. >> > >> >> > >> Having Ian on the core team will also strengthen our >> > integration with >> > >> the i18n community. >> > >> >> > >> Please let the ML know should you have any objections. 
>> > > >> > > The opposite ;), heartly agree with adding him, >> > > >> > > Andreas >> > >> > ++ >> > >> > Frank >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > < >> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > -- >> > Kind regards, >> > >> > Melvin Hillsman >> > mrhillsman at gmail.com >> > mobile: (832) 264-2646 <0832%20264%202646> >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- > > --------------------------------------------- > ​ > 周正喜 > Mobile: 13701280947 > ​WeChat: 472174291 > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Tue Sep 25 13:05:51 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Tue, 25 Sep 2018 15:05:51 +0200 Subject: [openstack-dev] [TripleO] Removing global bootstrap_nodeid? 
In-Reply-To: References: Message-ID: <45c73d3d-dda2-a548-5990-f34ecea9b706@redhat.com> Hi Steve, On 25/09/2018 10:51, Steven Hardy wrote: > Hi all, > > After some discussions with bandini at the PTG, I've been taking a > look at this bug and how to solve it: > > https://bugs.launchpad.net/tripleo/+bug/1792613 > (Also more information in downstream bz1626140) > > The problem is that we always run container bootstrap tasks (as well > as a bunch of update/upgrade tasks) on the bootstrap_nodeid, which by > default is always the overcloud-controller-0 node (regardless of which > services are running there). > > This breaks a pattern we established a while ago for Composable HA, > where we work out the bootstrap node by > $service_short_bootstrap_hostname, which means we always run on the > first node that has the service enabled (even if it spans more than > one Role). > > This presents two problems: > > 1. service_config_settings only puts the per-service hieradata on > nodes where a service is enabled, hence data needed for the > bootstrapping (e.g. keystone users etc.) can be missing if, say, > keystone is running on some role that's not Controller (this, I think, > is the root cause of the bug/bz linked above) > > 2. Even when we by luck have the data needed to complete the bootstrap > tasks, we'll end up pulling service containers to nodes where the > service isn't running, potentially wasting both time and space. > > I've been looking at solutions, and it seems like we either have to > generate per-service bootstrap_nodeid's (I have a patch to do this > https://review.openstack.org/605010), or perhaps we could just remove > all the bootstrap node id's, and switch to using hostnames instead? > Seems like that could be simpler, but wanted to check if there's > anything I'm missing?
I think we should recheck the initial assumptions, because based on my testing: * the bootstrap_nodeid is in fact a hostname already, despite its deceptive name, * it's not global, it is per-role. From my env: [root at overcloud-controller-2 ~]# hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid overcloud-controller-0 [root at overcloud-novacompute-1 ~]# hiera -c /etc/puppet/hiera.yaml bootstrap_nodeid overcloud-novacompute-0 This makes me think the problems (1) and (2) as stated above shouldn't be happening. The containers or tasks present in the service definition should be executed on all nodes where a service is present, and if they additionally filter for bootstrap_nodeid, it would only pick one node per role. So, the service *should* be deployed on the selected bootstrap node, which means the service-specific hiera should be present there and needless container downloading should not be happening, AFAICT. However, thinking about it this way, we probably have a different problem still: (3) The actions which use the bootstrap_nodeid check are not guaranteed to execute once per service. In case the service is present on multiple roles, the bootstrap_nodeid check succeeds once per such role. Using a per-service bootstrap host instead of a per-role bootstrap host sounds like going the right way then. However, none of the above provides a solid explanation of what's really happening in the LP/BZ mentioned above. Hopefully this info is at least a piece of the puzzle.
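The per-service selection argued for above (mirroring how Composable HA derives $service_short_bootstrap_hostname) can be sketched as: given role membership and per-role service lists, the bootstrap host for a service is the first host on which that service is enabled, computed once per service rather than once per role. A toy illustration with hypothetical data shapes, not the actual tripleo-heat-templates code:

```python
def service_bootstrap_hosts(role_hosts, role_services):
    """Map each service to the first host that runs it.

    role_hosts:    {role_name: [hostnames, in deterministic order]}
    role_services: {role_name: [service names enabled on that role]}
    """
    bootstrap = {}
    for role, services in role_services.items():
        for service in services:
            for host in role_hosts[role]:
                # Keep the first host seen for this service across all
                # roles, so the bootstrap check succeeds exactly once
                # per service even when the service spans roles.
                bootstrap.setdefault(service, host)
    return bootstrap

role_hosts = {
    "Controller": ["overcloud-controller-0", "overcloud-controller-1"],
    "Database": ["overcloud-database-0"],
}
role_services = {
    "Controller": ["haproxy", "keystone"],
    "Database": ["mysql", "keystone"],  # keystone spans two roles
}

hosts = service_bootstrap_hosts(role_hosts, role_services)
assert hosts["keystone"] == "overcloud-controller-0"  # one host, not one per role
assert hosts["mysql"] == "overcloud-database-0"
```

A per-role check, by contrast, would have run the keystone bootstrap on both overcloud-controller-0 and overcloud-database-0, which is problem (3).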
Jirka From john at johngarbutt.com Tue Sep 25 13:36:18 2018 From: john at johngarbutt.com (John Garbutt) Date: Tue, 25 Sep 2018 14:36:18 +0100 Subject: [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> Message-ID: On Thu, 20 Sep 2018 at 16:02, Matt Riedemann wrote: > On 9/20/2018 4:16 AM, John Garbutt wrote: > > Following on from the PTG discussions, I wanted to bring everyone's > > attention to Nova's plans to deprecate ComputeCapabilitiesFilter, > > including most of the integration with Ironic Capabilities. > > > > To be specific, this is my proposal in code form: > > https://review.openstack.org/#/c/603102/ > > > > Once the code we propose to deprecate is removed we will stop using > > capabilities pushed up from Ironic for 'scheduling', but we would still > > pass capabilities request in the flavor down to Ironic (until we get > > some standard traits and/or deploy templates sorted for things like > UEFI). > > > > Functionally, we believe all use cases can be replaced by using the > > simpler placement traits (this is more efficient than post placement > > filtering done using capabilities): > > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/ironic-driver-traits.html > > > > Please note the recent addition of forbidden traits that helps improve > > the usefulness of the above approach: > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-forbidden-traits.html > > > > For example, a flavor request for GPUs >= 2 could be replaced by a > > custom trait that reports if a given Ironic node has > > CUSTOM_MORE_THAN_2_GPUS.
That is a bad example (longer term we don't > > want to use traits for this, but that is a discussion for another day) > > but it is the example that keeps being raised in discussions on this > topic. > > > > The main reason for reaching out in this email is to ask if anyone has > > needs that the ResourceClass and Traits scheme does not currently > > address, or can think of a problem with a transition to the newer > approach. > > I left a few comments in the change, but I'm assuming as part of the > deprecation we'd remove the filter from the default enabled_filters list > so new installs don't automatically get warnings during scheduling? > +1 Good point, we totally need to do that. > Another thing is about existing flavors configured for these > capabilities-scoped specs. Are you saying during the deprecation we'd > continue to use those even if the filter is disabled? In the review I > had suggested that we add a pre-upgrade check which inspects the flavors > and if any of these are found, we report a warning meaning those flavors > need to be updated to use traits rather than capabilities. Would that be > reasonable? > I like the idea of a warning, but there are features that have not yet moved to traits: https://specs.openstack.org/openstack/ironic-specs/specs/juno-implemented/uefi-boot-for-ironic.html There is a more general plan that will help, but it's not quite ready yet: https://review.openstack.org/#/c/504952/ As such, I think we can't yet pull the plug on flavors including capabilities and passing them to Ironic, but (after a cycle of deprecation) I think we can now stop pushing capabilities from Ironic into Nova and using them for placement. Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From shardy at redhat.com Tue Sep 25 13:37:34 2018 From: shardy at redhat.com (Steven Hardy) Date: Tue, 25 Sep 2018 14:37:34 +0100 Subject: [openstack-dev] [TripleO] Removing global bootstrap_nodeid?
In-Reply-To: <45c73d3d-dda2-a548-5990-f34ecea9b706@redhat.com> References: <45c73d3d-dda2-a548-5990-f34ecea9b706@redhat.com> Message-ID: On Tue, Sep 25, 2018 at 2:06 PM Jiří Stránský wrote: > > Hi Steve, Thanks for the reply - summary of our follow-up discussion on IRC below: > On 25/09/2018 10:51, Steven Hardy wrote: > > Hi all, > > > > After some discussions with bandini at the PTG, I've been taking a > > look at this bug and how to solve it: > > > > https://bugs.launchpad.net/tripleo/+bug/1792613 > > (Also more information in downstream bz1626140) > > > > The problem is that we always run container bootstrap tasks (as well > > as a bunch of update/upgrade tasks) on the bootstrap_nodeid, which by > > default is always the overcloud-controller-0 node (regardless of which > > services are running there). > > > > This breaks a pattern we established a while ago for Composable HA, > > where we' work out the bootstrap node by > > $service_short_bootstrap_hostname, which means we always run on the > > first node that has the service enabled (even if it spans more than > > one Role). > > > > This presents two problems: > > > > 1. service_config_settings only puts the per-service hieradata on > > nodes where a service is enabled, hence data needed for the > > bootstrapping (e.g keystone users etc) can be missing if, say, > > keystone is running on some role that's not Controller (this, I think > > is the root-cause of the bug/bz linked above) > > > > 2. Even when we by luck have the data needed to complete the bootstrap > > tasks, we'll end up pulling service containers to nodes where the > > service isn't running, potentially wasting both time and space. > > > > I've been looking at solutions, and it seems like we either have to > > generate per-service bootstrap_nodeid's (I have a patch to do this > > https://review.openstack.org/605010), or perhaps we could just remove > > all the bootstrap node id's, and switch to using hostnames instead? 
> > Seems like that could be simpler, but wanted to check if there's > > anything I'm missing? > > I think we should recheck he initial assumptions, because based on my > testing: > > * the bootstrap_nodeid is in fact a hostname already, despite its > deceptive name, > > * it's not global, it is per-role. > > From my env: > > [root at overcloud-controller-2 ~]# hiera -c /etc/puppet/hiera.yaml > bootstrap_nodeid > overcloud-controller-0 > > [root at overcloud-novacompute-1 ~]# hiera -c /etc/puppet/hiera.yaml > bootstrap_nodeid > overcloud-novacompute-0 > > This makes me think the problems (1) and (2) as stated above shouldn't > be happening. The containers or tasks present in service definition > should be executed on all nodes where a service is present, and if they > additionally filter for bootstrap_nodeid, it would only pick one node > per role. So, the service *should* be deployed on the selected bootstrap > node, which means the service-specific hiera should be present there and > needless container downloading should not be happening, AFAICT. Ah, I'd missed that because we have another different definition of the bootstrap node ID here: https://github.com/openstack/tripleo-heat-templates/blob/master/common/deploy-steps.j2#L283 That one is global, because it only considers primary_role_name, which I think explains the problem described in the LP/BZ. > However, thinking about it this way, we probably have a different > problem still: > > (3) The actions which use bootstrap_nodeid check are not guaranteed to > execute once per service. In case the service is present on multiple > roles, the bootstrap_nodeid check succeeds once per such role. > > Using per-service bootstrap host instead of per-role bootstrap host > sounds like going the right way then. 
Yeah it seems like both definitions of the bootstrap node described above are wrong in different ways ;) > However, none of the above provides a solid explanation of what's really > happening in the LP/BZ mentioned above. Hopefully this info is at least > a piece of the puzzle. Yup, thanks for working through it - as mentioned above I think the problem is the docker-puppet.py conditional that runs the bootstrap tasks uses the deploy-steps.j2 global bootstrap ID, so it can run on potentially the wrong role. Unless there's other ideas, I think this will be a multi-step fix: 1. Replace all t-h-t references for bootstrap_nodeid with per-service bootstrap node names (I pushed https://review.openstack.org/#/c/605046/ which may make this easier to do in the ansible tasks) 2. Ensure puppet-tripleo does the same 3. Rework docker-puppet.py so it can read all the per-service bootstrap nodes (and filter to only run when appropriate), or perhaps figure out a way to do the filtering in the ansible tasks before running docker-puppet.py 4. Remove all bootstrap_node* references from the tripleo-ansible-inventory code, to enforce using the per-service values Any other ideas or suggestions of things I've missed welcome! :) Steve From mark at stackhpc.com Tue Sep 25 13:39:31 2018 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 25 Sep 2018 14:39:31 +0100 Subject: [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> Message-ID: On Tue, 25 Sep 2018 at 14:37, John Garbutt wrote: > On Thu, 20 Sep 2018 at 16:02, Matt Riedemann wrote: > >> On 9/20/2018 4:16 AM, John Garbutt wrote: >> > Following on from the PTG discussions, I wanted to bring everyone's >> > attention to Nova's plans to deprecate ComputeCapabilitiesFilter, >> > including most of the the integration with Ironic Capabilities. 
>> > >> > To be specific, this is my proposal in code form: >> > https://review.openstack.org/#/c/603102/ >> > >> > Once the code we propose to deprecate is removed we will stop using >> > capabilities pushed up from Ironic for 'scheduling', but we would still >> > pass capabilities request in the flavor down to Ironic (until we get >> > some standard traits and/or deploy templates sorted for things like >> UEFI). >> > >> > Functionally, we believe all use cases can be replaced by using the >> > simpler placement traits (this is more efficient than post placement >> > filtering done using capabilities): >> > >> https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/ironic-driver-traits.html >> > >> > Please note the recent addition of forbidden traits that helps improve >> > the usefulness of the above approach: >> > >> https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-forbidden-traits.html >> > >> > For example, a flavor request for GPUs >= 2 could be replaced by a >> > custom trait trait that reports if a given Ironic node has >> > CUSTOM_MORE_THAN_2_GPUS. That is a bad example (longer term we don't >> > want to use traits for this, but that is a discussion for another day) >> > but it is the example that keeps being raised in discussions on this >> topic. >> > >> > The main reason for reaching out in this email is to ask if anyone has >> > needs that the ResourceClass and Traits scheme does not currently >> > address, or can think of a problem with a transition to the newer >> approach. >> >> I left a few comments in the change, but I'm assuming as part of the >> deprecation we'd remove the filter from the default enabled_filters list >> so new installs don't automatically get warnings during scheduling? >> > > +1 > Good point, we totally need to do that. > > >> Another thing is about existing flavors configured for these >> capabilities-scoped specs. 
Are you saying during the deprecation we'd >> continue to use those even if the filter is disabled? In the review I >> had suggested that we add a pre-upgrade check which inspects the flavors >> and if any of these are found, we report a warning meaning those flavors >> need to be updated to use traits rather than capabilities. Would that be >> reasonable? >> > > I like the idea of a warning, but there are features that have not yet > moved to traits: > > https://specs.openstack.org/openstack/ironic-specs/specs/juno-implemented/uefi-boot-for-ironic.html > > There is a more general plan that will help, but its not quite ready yet: > https://review.openstack.org/#/c/504952/ > > As such, I think we can't get pull the plug on flavors including > capabilities and passing them to Ironic, but (after a cycle of deprecation) > I think we can now stop pushing capabilities from Ironic into Nova and > using them for placement. > Aren't the two things interdependent? You need to be able to schedule to a node with the requested capability in order to use that capability to influence deployment. Mark > Thanks, > John > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clay.gerrard at gmail.com Tue Sep 25 13:54:46 2018 From: clay.gerrard at gmail.com (Clay Gerrard) Date: Tue, 25 Sep 2018 08:54:46 -0500 Subject: [openstack-dev] [Swift] Redhat Research Message-ID: In one of the Swift sessions at the Denver PTG Doug Hellman suggested there are some programs in RedHat that work with university graduate students on doing computer science research [1] which might be appropriate for certain kinds of work on some parts of OpenStack. 
Swift has solved a few interesting mathy sort problems over the years, and we've got a few things still left to tackle. Coming up we've got container sharding consensus, LOSF slab file compaction, unified consistency engine RPC, and that troubling little golang fork/rewrite abandonware [2]. Probably others too. I field questions about how swift works 8-10 times a year from university students apparently doing some sort of analysis on Swift related to their course work... it never occurred to me I might be able to suggest something they might think on which would be useful? I don't really have the capacity or the know-how to pursue it further than this, anyone have any ideas or experience along these lines? -clayg 1. https://research.redhat.com/ 2. https://github.com/troubling/hummingbird -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Tue Sep 25 15:47:10 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Tue, 25 Sep 2018 17:47:10 +0200 Subject: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core Message-ID: Hi, I would like to propose Chason Chan to the kolla-ansible core team. Chason has been working on the addition of Vitrage roles, reworking the VpnaaS service, maintaining documentation, as well as fixing many bugs. Voting will be open for 14 days (until 9th of Oct). Kolla-ansible cores, please leave a vote. Consider this mail my +1 vote. Regards, Eduardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From berendt at betacloud-solutions.de Tue Sep 25 15:57:55 2018 From: berendt at betacloud-solutions.de (Christian Berendt) Date: Tue, 25 Sep 2018 17:57:55 +0200 Subject: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core In-Reply-To: References: Message-ID: +1 > On 25. Sep 2018, at 17:47, Eduardo Gonzalez wrote: > > Hi, > > I would like to propose Chason Chan to the kolla-ansible core team.
> > Chason is been working on addition of Vitrage roles, rework VpnaaS service, maintaining > documentation as well as fixing many bugs. > > Voting will be open for 14 days (until 9th of Oct). > > Kolla-ansible cores, please leave a vote. > Consider this mail my +1 vote > > Regards, > Eduardo > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Christian Berendt Chief Executive Officer (CEO) Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From mark at stackhpc.com Tue Sep 25 16:00:22 2018 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 25 Sep 2018 17:00:22 +0100 Subject: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core In-Reply-To: References: Message-ID: +1 On Tue, 25 Sep 2018 at 16:48, Eduardo Gonzalez wrote: > Hi, > > I would like to propose Chason Chan to the kolla-ansible core team. > > Chason is been working on addition of Vitrage roles, rework VpnaaS > service, maintaining > documentation as well as fixing many bugs. > > Voting will be open for 14 days (until 9th of Oct). > > Kolla-ansible cores, please leave a vote. > Consider this mail my +1 vote > > Regards, > Eduardo > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Tue Sep 25 17:08:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 25 Sep 2018 12:08:03 -0500 Subject: [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> Message-ID: <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> On 9/25/2018 8:36 AM, John Garbutt wrote: > Another thing is about existing flavors configured for these > capabilities-scoped specs. Are you saying during the deprecation we'd > continue to use those even if the filter is disabled? In the review I > had suggested that we add a pre-upgrade check which inspects the > flavors > and if any of these are found, we report a warning meaning those > flavors > need to be updated to use traits rather than capabilities. Would > that be > reasonable? > > > I like the idea of a warning, but there are features that have not yet > moved to traits: > https://specs.openstack.org/openstack/ironic-specs/specs/juno-implemented/uefi-boot-for-ironic.html > > There is a more general plan that will help, but its not quite ready yet: > https://review.openstack.org/#/c/504952/ > > As such, I think we can't get pull the plug on flavors including > capabilities and passing them to Ironic, but (after a cycle of > deprecation) I think we can now stop pushing capabilities from Ironic > into Nova and using them for placement. Forgive my ignorance, but if traits are not on par with capabilities, why are we deprecating the capabilities filter? -- Thanks, Matt From doug at doughellmann.com Tue Sep 25 17:41:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 25 Sep 2018 13:41:26 -0400 Subject: [openstack-dev] [storyboard] Prioritization? 
In-Reply-To: <1537865263.2460.2.camel@sotk.co.uk> References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> <1537865263.2460.2.camel@sotk.co.uk> Message-ID: Adam Coldrick writes: > On Mon, 2018-09-24 at 22:47 +0000, Jeremy Stanley wrote: >> On 2018-09-24 18:35:04 -0400 (-0400), Doug Hellmann wrote: >> [...] >> > At the PTG there was feedback from at least one team that >> > consumers of the data in storyboard want a priority setting on >> > each story. Historically the response to that has been that >> > different users will have different priorities, so each of them >> > using worklists is the best way to support those differences of >> > opinion. >> > >> > I think we need to reconsider that position if it's going to block >> > adoption. I think Ben's case is an excellent second example of >> > where having a field to hold some sort of priority value would be >> > useful. > > I'm strongly against reverting to having a single global priority value > for things in StoryBoard, especially so for stories as opposed to tasks. > Having a single global priority field for stories at best implies that a > cross-project story has the same priority in each project, and at worst > means cross-project discussions need to occur to find an agreement on an > acceptable priority (or one affected project's opinion overrules the > others, with the others' priorities becoming harder to understand at a > glance or just completely lost). I would be fine with 1 field per task. > For tasks I am less concerned in that aspect since cross-project support > isn't hurt, but remain of the opinion that a global field is the wrong > approach since it means that only one person (or group of people) gets to > visibly express their opinion on the priority of the task. 
While I agree that not everyone attaches the same priority to a given task, and it's important for everyone to be able to have their own say in the relative importance of tasks/stories, I think it's more important than you're crediting for downstream consumers to have a consistent way to understand the priority attached by the person(s) doing the implementation work. > Allowing multiple groups to express opinions on the priority of the same > tasks allows situations where (to use a real world example I saw recently, > but not in OpenStack) an upstream project marks a bug as medium priority > for whatever reason, but a downstream user of that project is completely > broken by that bug, meaning either providing a fix to it or persuading > someone else to is of critical importance to them. This example is excellent, and I think it supports my position. An important area where using boards or worklists falls short of my own needs is that, as far as I know, it is not possible to subscribe to notifications for when a story or task is added to a list or board. So as a person who submits a story, I have no way of knowing when the team(s) working on it add it to (or remove it from) a priority list or change its priority by moving it to a different lane in a board. Communicating about what we're doing is as important as gathering and tracking the list of tasks in the first place. Without a notification that the priority of a story or task has been lowered, how would I know that I need to go try to persuade the team responsible to raise it back up? Even if we add (or there is) some way for me to receive a notification based on board or list membership, without a real priority field we have several different ways to express priority (different tag names, a single worklist that's kept in order, a board with separate columns for each status, etc.). 
That means each team could potentially use a different way, which in turn means downstream consumers have to discover, understand, and subscribe to all of those various ways, and use them correctly, for every team they are tracking. I think that's an unreasonable burden to place on someone who is not working in the community constantly, as is the case for many of our operators who report bugs. > With global priority there is a trade-off, either the bug tracker displays > priorities with no reference as to who they are important to, downstream > duplicate the issue elsewhere to track their priority, or their expression > of how important the issue is is lost in a comment in order to maintain > the state of "all priorities are determined by the core team". I suppose that depends on the reason we're using the task tracker. If we're just throwing data into it without trying to use it to communicate, then I can see us having lots of different views of priority with the same level of "official-ness". I don't think that's what we're doing though. I think we're trying to help teams track what they've committed to do and *communicate* those commitments to folks outside of the team. And from that perspective, the most important definition of "priority" is the one attached by the person(s) doing the work. That's not the same as saying no one else's opinion about priority matters, but it does ultimately come down to someone actually doing one task before another. And I would like to be able to follow along when those people prioritize work on the bugs I file. Doug From kennelson11 at gmail.com Tue Sep 25 18:22:30 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 25 Sep 2018 11:22:30 -0700 Subject: [openstack-dev] [storyboard][oslo] Fewer stories than bugs? In-Reply-To: <61799e53-2fa6-40a7-ebbd-a1f3df624a8f@nemebean.com> References: <61799e53-2fa6-40a7-ebbd-a1f3df624a8f@nemebean.com> Message-ID: Hey Ben, I am looking into it!
I am guessing that some of the discrepancy is bugs being filed after I did the migration. I might also have missed one of the launchpad projects. I will redo the migrations today and we can see if the numbers match up after (or are at least much closer). We've never had an issue with stories not being created and there were no errors in any of the runs I did of the migration scripts. I'm guessing PEBKAC :) -Kendall (diablo_rojo) On Mon, Sep 24, 2018 at 2:38 PM Ben Nemec wrote: > This is a more oslo-specific (maybe) question that came out of the test > migration. I noticed that launchpad is reporting 326 open bugs across > the Oslo projects, but in Storyboard there are only 266 stories created. > While I'm totally onboard with reducing our bug backlog, I'm curious why > that is the case. I'm speculating that maybe Launchpad counts bugs that > affect multiple Oslo projects as multiple bugs whereas Storyboard is > counting them as a single story? > > I think we were also going to skip > https://bugs.launchpad.net/openstack-infra which for some reason > appeared in the oslo group, but that's only two bugs so it doesn't > account for anywhere near the full difference. > > Mostly I just want to make sure we didn't miss something. I'm hoping > this is a known behavior and we don't have to start comparing bug lists > to find the difference. :-) > > Thanks. > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pc2929 at att.com Tue Sep 25 18:40:12 2018 From: pc2929 at att.com (CARVER, PAUL) Date: Tue, 25 Sep 2018 18:40:12 +0000 Subject: [openstack-dev] [storyboard] Prioritization? 
In-Reply-To: References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> <1537865263.2460.2.camel@sotk.co.uk> Message-ID: Doug Hellmann wrote: >If we're just throwing data into it without trying to use it to communicate, then I can see us having lots of different views of priority with >the same level of "official-ness". I don't think that's what we're doing though. I think we're trying to help teams track what they've >committed to do and *communicate* those commitments to folks outside of the team. And from that perspective, the most important >definition of "priority" is the one attached by the person(s) doing the work. That's not the same as saying no one else's opinion about >priority matters, but it does ultimately come down someone actually doing one task before another. And I would like to be able to follow >along when those people prioritize work on the bugs I file. I agree. Different people certainly may prioritize the same thing differently, but there are far more consumers of software than there are producers and the most important thing a consumer wants to know (about a feature that they're eagerly awaiting) is what is the priority of that feature to whoever is doing the work of implementing it. There is certainly room for additional means of juggling and discussing/negotiating priorities in the stages before work really gets under way, but if it doesn't eventually become clear 1) who's doing the work 2) when are they targeting completion 3) what (if anything) is higher up on their todo list then it's impossible for anyone else to make any sort of plans that depend on that work. Plans could include figuring out how to add more resources or contingency plans. 
It's also possible that people or projects may develop a reputation for not delivering on their stated top priorities, but that's at least better than having no idea what the priorities are because every person and project is making up their own system for tracking it. From bodenvmw at gmail.com Tue Sep 25 19:55:09 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Tue, 25 Sep 2018 13:55:09 -0600 Subject: [openstack-dev] [neutron] Opting-in to receive neutron-lib consumption patches Message-ID: Please read on if your project uses neutron-lib. During the PTG [1] we decided that projects wanting to receive neutron-lib consumption patches [2] (for free) need to explicitly opt in by adding the string "neutron-lib-current" to a comment in their requirements.txt. Based on the list of identified current networking projects [3], I've submitted an opt-in patch for each [4]. If your project isn't in [3][4] but you think it should be, it may be missing a recent neutron-lib version in its requirements.txt. By landing the opt-in patch for your project's repo [4], you are effectively agreeing to: - Stay current with neutron, TC and infra initiatives, thereby allowing consumption patches to work with the latest and greatest. - Help review and land consumption patches when they come into your review queue. As we've discussed before, there's about a 2 week window to land such patches [5]. For projects that don't opt in, you can continue to consume older versions of neutron-lib from PyPI, manually updating your project as you go.
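To make the opt-in concrete, here is a sketch of what the marker could look like in a consuming project's requirements.txt (the comment placement and the version pin shown are purely illustrative, not taken from a real project):

```
# requirements.txt
# This project opts in to receiving neutron-lib consumption patches:
# neutron-lib-current
neutron-lib>=1.18.0  # Apache-2.0
```

Presumably the tooling only needs to find the "neutron-lib-current" string somewhere in a comment, so the exact layout is up to each project.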
Thanks [1] https://etherpad.openstack.org/p/neutron-stein-ptg [2] https://docs.openstack.org/neutron-lib/latest/contributor/contributing.html#phase-4-consume [3] https://etherpad.openstack.org/p/neutron-sibling-setup [4] https://review.openstack.org/#/q/topic:neutron-lib-current-optin+(status:open+OR+status:merged) [5] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131841.html From openstack at nemebean.com Tue Sep 25 21:30:31 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 25 Sep 2018 16:30:31 -0500 Subject: [openstack-dev] [oslo][castellan] Time for a 1.0 release? Message-ID: <8bab2939-ae16-31f3-8191-2cb1e81bc9df@nemebean.com> Doug pointed out on a recent Oslo release review that castellan is still not officially 1.0. Given the age of the project and the fact that we're asking people to deploy a Castellan-compatible keystore as one of the base services, it's probably time to address that. To that end, I'm sending this to see if anyone is aware of any reasons we shouldn't go ahead and tag a 1.0 of Castellan. Thanks. -Ben From zhu.bingbing at 99cloud.net Wed Sep 26 01:30:21 2018 From: zhu.bingbing at 99cloud.net (zhubingbing) Date: Wed, 26 Sep 2018 09:30:21 +0800 (GMT+08:00) Subject: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core In-Reply-To: Message-ID: +1 At 2018-09-25 23:47:10, Eduardo Gonzalez wrote: Hi, I would like to propose Chason Chan to the kolla-ansible core team. Chason has been working on the addition of Vitrage roles, reworking the VPNaaS service, maintaining documentation as well as fixing many bugs. Voting will be open for 14 days (until 9th of Oct). Kolla-ansible cores, please leave a vote. Consider this mail my +1 vote Regards, Eduardo -------------- next part -------------- An HTML attachment was scrubbed...
URL: From huifeng.le at intel.com Wed Sep 26 02:04:15 2018 From: huifeng.le at intel.com (Le, Huifeng) Date: Wed, 26 Sep 2018 02:04:15 +0000 Subject: [openstack-dev] [Neutron] Different behavior of admin_state_up attribute between trunk and port Message-ID: <76647BD697F40748B1FA4F56DA02AA0B4D50939E@SHSMSX104.ccr.corp.intel.com> Hi, When using neutron, we found that the behavior of the admin_state_up attribute differs between ports and trunks, which may impact user experience. For a port, setting admin_state_up = FALSE sets the port status to "Down" and disables the port so that it no longer sends or receives packets (e.g. the port's tap device is set to down, or the port is moved to the DEAD_VLAN, etc.). For a trunk, setting admin_state_up = FALSE locks the trunk in that it prevents operations such as adding/removing subports, but the trunk is still operational (e.g. the trunk sub-ports can still send/receive packets). So does it make sense to make a change (e.g. submit a bug or RFE) that aligns the behavior between trunk and port (e.g. setting admin_state_up = FALSE on a trunk disables that trunk on the vswitch and stops processing VLAN packets arriving from that VM instance), to avoid potential confusion? Thanks much! Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From skramaja at redhat.com Wed Sep 26 04:57:20 2018 From: skramaja at redhat.com (Saravanan KR) Date: Wed, 26 Sep 2018 10:27:20 +0530 Subject: [openstack-dev] [Tripleo] Automating role generation In-Reply-To: References: <265867de-601f-6498-dc7f-4b50bf03904d@redhat.com> Message-ID: Mutually exclusive services should be decided based on the environment files used rather than by the list of services on a role. For example, the ComputeSriov role can be deployed with ml2-ovs, ml2-odl or ml2-ovn based on the environment file used for the deploy command. Looking forward.
Regards, Saravanan KR On Sat, Sep 22, 2018 at 4:52 AM Janki Chhatbar wrote: > > Hi All > > As per the discussion at PTG, I have filed a BP [1]. I will push a spec sometime around mid-October. > > [1]. https://blueprints.launchpad.net/tripleo/+spec/automatic-role-generation > > On Tue, Sep 4, 2018 at 2:56 PM Steven Hardy wrote: >> >> On Tue, Sep 4, 2018 at 9:48 AM, Jiří Stránský wrote: >> > On 4.9.2018 08:13, Janki Chhatbar wrote: >> >> >> >> Hi >> >> >> >> I am looking to automate role file generation in TripleO. The idea is >> >> basically for an operator to create a simple yaml file (operator.yaml, >> >> say) >> >> listing services that are needed and then TripleO to generate >> >> Controller.yaml enabling only those services that are mentioned. >> >> >> >> For example: >> >> operator.yaml >> >> services: >> >> Glance >> >> OpenDaylight >> >> Neutron ovs agent >> > >> > >> > I'm not sure it's worth introducing a new file format as such, if the >> > purpose is essentially to expand e.g. "Glance" into >> > "OS::TripleO::Services::GlanceApi" and >> > "OS::TripleO::Services::GlanceRegistry"? It would be another layer of >> > indirection (additional mental work for the operator who wants to understand >> > how things work), while the layer doesn't make too much difference in >> > preparation of the role. At least that's my subjective view. >> > >> >> >> >> Then TripleO should >> >> 1. Fail because ODL and OVS agent are either-or services >> > >> > >> > +1 i think having something like this would be useful. >> > >> >> 2. 
After operator.yaml is modified to remove Neutron ovs agent, it should >> >> generate Controller.yaml with below content >> >> >> >> ServicesDefault: >> >> - OS::TripleO::Services::GlanceApi >> >> - OS::TripleO::Services::GlanceRegistry >> >> - OS::TripleO::Services::OpenDaylightApi >> >> - OS::TripleO::Services::OpenDaylightOvs >> >> >> >> Currently, operator has to manually edit the role file (specially when >> >> deployed with ODL) and I have seen many instances of failing deployment >> >> due >> >> to variations of OVS, OVN and ODL services enabled when they are actually >> >> exclusive. >> > >> > >> > Having validations on the service list would be helpful IMO, e.g. "these >> > services must not be in one deployment together", "these services must not >> > be in one role together", "these services must be together", "we recommend >> > this service to be in every role" (i'm thinking TripleOPackages, Ntp, ...) >> > etc. But as mentioned above, i think it would be better if we worked >> > directly with the "OS::TripleO::Services..." values rather than a new layer >> > of proxy-values. >> > >> > Additional random related thoughts: >> > >> > * Operator should still be able to disobey what the validation suggests, if >> > they decide so. >> > >> > * Would be nice to have the info about particular services (e.g what can't >> > be together) specified declaratively somewhere (TripleO's favorite thing in >> > the world -- YAML?). >> > >> > * We could start with just one type of validation, e.g. the mutual >> > exclusivity rule for ODL vs. OVS, but would be nice to have the solution >> > easily expandable for new rule types. >> >> This is similar to how the UI uses the capabilities-map.yaml, so >> perhaps we can use that as the place to describe service dependencies >> and conflicts? 
>> >> https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities-map.yaml >> >> Currently this isn't used at all for the CLI, but I can imagine some >> kind of wizard interface being useful, e.g you could say enable >> "Glance" group and it'd automatically pull in all glance dependencies? >> >> Another thing to mention is this doesn't necessarily have to generate >> a new role (although it could), the *Services parameter for existing >> roles can be overridden, so it might be simpler to generate an >> environment file instead. >> >> Steve >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Thanking you > > Janki Chhatbar > OpenStack | Docker | SDN > simplyexplainedblog.wordpress.com > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From chris.friesen at windriver.com Wed Sep 26 06:50:16 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 26 Sep 2018 00:50:16 -0600 Subject: [openstack-dev] [storyboard] why use different "bug" tags per project? Message-ID: Hi, At the PTG, it was suggested that each project should tag their bugs with "-bug" to avoid tags being "leaked" across projects, or something like that. Could someone elaborate on why this was recommended? It seems to me that it'd be better for all projects to just use the "bug" tag for consistency. If you want to get all bugs in a specific project it would be pretty easy to search for stories with a tag of "bug" and a project of "X". 
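That kind of lookup is simple enough to sketch client-side; the story records below are made-up dicts purely for illustration, not the real StoryBoard API shape (real stories carry more fields, and it is tasks that link to projects):

```python
# Hypothetical story records for illustration only.
stories = [
    {"id": 1, "tags": ["bug"], "project": "X"},
    {"id": 2, "tags": ["enhancement"], "project": "X"},
    {"id": 3, "tags": ["bug"], "project": "Y"},
]

def bugs_for(project, stories):
    """Return stories tagged 'bug' that belong to the given project."""
    return [s for s in stories if "bug" in s["tags"] and s["project"] == project]

print([s["id"] for s in bugs_for("X", stories)])  # prints [1]
```

With a filter like this, a shared "bug" tag plus a project filter is enough to scope results, without per-project tag names.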
Chris From aschadin at sbcloud.ru Wed Sep 26 07:04:56 2018 From: aschadin at sbcloud.ru (Alexander Chadin) Date: Wed, 26 Sep 2018 07:04:56 +0000 Subject: [openstack-dev] [watcher] weekly meeting Message-ID: Greetings, We’ll have a meeting today at 8:00 UTC on the #openstack-meeting-3 channel. Best Regards, ____ Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Wed Sep 26 08:23:32 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 26 Sep 2018 16:23:32 +0800 Subject: [openstack-dev] [cyborg] zero tolerance policy on padding activities Message-ID: Hi all, I want to emphasize the zero tolerance policy in the Cyborg project regarding padding activities. If you look at the gerrit record [0] you will probably get the idea: out of the 15 abandoned patches, only the ones from jiapei and shaohe were actually meant to do real fixing. We have the #openstack-cyborg IRC channel and the community mailing list, as well as individual core members you could reach out to by email. We have also set up a WeChat group for Chinese developers where the atmosphere is welcoming and funny gifs fly around all the time. There are more than enough measures to help you actually get involved with the project. Do the right thing. [0] https://review.openstack.org/#/q/status:abandoned+project:openstack/cyborg+label:Code-Review%253D-2 -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zhang.lei.fly at gmail.com Wed Sep 26 09:02:28 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Wed, 26 Sep 2018 17:02:28 +0800 Subject: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core In-Reply-To: References: Message-ID: +1 good job On Wed, Sep 26, 2018 at 9:30 AM zhubingbing wrote: > +1 > > > > > > At 2018-09-25 23:47:10, Eduardo Gonzalez wrote: > > Hi, > > I would like to propose Chason Chan to the kolla-ansible core team. > > Chason has been working on the addition of Vitrage roles, reworking the VPNaaS > service, maintaining > documentation as well as fixing many bugs. > > Voting will be open for 14 days (until 9th of Oct). > > Kolla-ansible cores, please leave a vote. > Consider this mail my +1 vote > > Regards, > Eduardo > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Wed Sep 26 09:10:28 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 26 Sep 2018 11:10:28 +0200 Subject: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions In-Reply-To: <5B518A5C-10FD-45D1-B462-E5A02DBBE2FE@gmail.com> References: <5B518A5C-10FD-45D1-B462-E5A02DBBE2FE@gmail.com> Message-ID: <1537953028.1215308.1521068936.71ABA285@webmail.messagingengine.com> Thanks for the summary, Ildiko. I have some questions inline.
On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote: > > We agreed to prefer federation for Keystone and came up with two work > items to cover missing functionality: > > * Keystone to trust a token from an ID Provider master and when the auth > method is called, perform an idempotent creation of the user, project > and role assignments according to the assertions made in the token This sounds like it is based on the customizations done at Oath, which to my recollection did not use the actual federation implementation in keystone due to its reliance on Athenz (I think?) as an identity manager. Something similar can be accomplished in standard keystone with the mapping API in keystone, which can cause dynamic generation of a shadow user, project and role assignments. > * Keystone should support the creation of users and projects with > predictable UUIDs (eg.: hash of the name of the users and projects). > This greatly simplifies Image federation and telemetry gathering I was in and out of the room and don't recall this discussion exactly. We have historically pushed back hard against allowing setting a project ID via the API, though I can see predictable-but-not-settable as less problematic. One of the use cases from the past was being able to use the same token in different regions, which is problematic from a security perspective. Is that the idea here? Or could someone provide more details on why this is needed? Were there any volunteers to help write up specs and work on the implementations in keystone? Colleen (cmurphy) From adam at sotk.co.uk Wed Sep 26 09:31:59 2018 From: adam at sotk.co.uk (Adam Coldrick) Date: Wed, 26 Sep 2018 10:31:59 +0100 Subject: [openstack-dev] [storyboard] Prioritization?
In-Reply-To: References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> <1537865263.2460.2.camel@sotk.co.uk> Message-ID: <1537954319.2460.7.camel@sotk.co.uk> On Tue, 2018-09-25 at 13:41 -0400, Doug Hellmann wrote: > Adam Coldrick writes: > > For tasks I am less concerned in that aspect since cross-project > > support > > isn't hurt, but remain of the opinion that a global field is the wrong > > approach since it means that only one person (or group of people) gets > > to > > visibly express their opinion on the priority of the task. > > While I agree that not everyone attaches the same priority to a given > task, and it's important for everyone to be able to have their own say > in the relative importance of tasks/stories, I think it's more important > than you're crediting for downstream consumers to have a consistent way > to understand the priority attached by the person(s) doing the > implementation work. I think you're right. The existing implementation hasn't really considered the case of a downstream consumer unused to the differences in StoryBoard just turning up at the task tracker to find out something like "what are the priorities of the oslo team?", and it shows in how undiscoverable that is to an outsider, no matter which of the workflows we suggest is being used.
In the example, the downstream user was one of the main users of the upstream project and there was some contributor overlap. In my ideal world the downstream project would've expressed the priority in a worklist or board that upstream people were subscribed (or otherwise paying attention) to. Then, the upstream project would've set their priority in a board or worklist which the downstream folk also pay attention to somehow (since they are interested in how upstream are prioritising work). This way a contributor interested in the priorities of both projects could see the overlap, and perhaps use that to decide what to work on next. Also, since downstream have a way to pay attention to upstream's priority, they can see the low "official" priority and go and have any discussions needed. > An important area where using boards or worklists falls short of my own > needs is that, as far as I know, it is not possible to subscribe to > notifications for when a story or task is added to a list or board. So > as a person who submits a story, I have no way of knowing when the > team(s) working on it add it to (or remove it from) a priority list or > change its priority by moving it to a different lane in a board. > Communicating about what we're doing is as important as gathering and > tracking the list of tasks in the first place. Without a notification > that the priority of a story or task has been lowered, how would I know > that I need to go try to persuade the team responsible to raise it back > up? It is indeed not possible for the scenario I describe above to work neatly in StoryBoard today, because boards and worklists don't give notifications. That's because we've not got round to finishing that part of the implementation yet, rather than by design. Worklists do currently generate history of changes made to them, there is just no good way to see it anywhere and no notifications sent based on it. 
> Even if we add (or there is) some way for me to receive a notification > based on board or list membership, without a real priority field we have > several different ways to express priority (different tag names, a > single worklist that's kept in order, a board with separate columns for > each status, etc.). That means each team could potentially use a > different way, which in turn means downstream consumers have to > discover, understand, and subscribe to all of those various ways, and > use them correctly, for every team they are tracking. I think that's an > unreasonable burden to place on someone who is not working in the > community constantly, as is the case for many of our operators who > report bugs. This is the part I've not really considered before for StoryBoard. Perhaps we should've been defining an "official" prioritisation workflow and trying to make that discoverable. > > With global priority there is a trade-off, either the bug tracker > > displays > > priorities with no reference as to who they are important to, > > downstream > > duplicate the issue elsewhere to track their priority, or their > > expression > > of how important the issue is is lost in a comment in order to > > maintain > > the state of "all priorities are determined by the core team". > > I suppose that depends on the reason we're using the task tracker. > > If we're just throwing data into it without trying to use it to > communicate, then I can see us having lots of different views of > priority with the same level of "official-ness". I don't think that's > what we're doing though. I think we're trying to help teams track what > they've committed to do and *communicate* those commitments to folks > outside of the team. And from that perspective, the most important > definition of "priority" is the one attached by the person(s) doing the > work. 
That's not the same as saying no one else's opinion about priority > matters, but it does ultimately come down someone actually doing one > task before another. And I would like to be able to follow along when > those people prioritize work on the bugs I file. I agree that the actual most important opinion is the one held by the person actually doing the work. Historically we've avoided any ACLs on the story/task side of things to make things like "Alice sees a low-hanging- fruit task she can do that some people care about, assigns herself, and does the work". Its totally possible to just also add a priority field with a similar lack of ACLs, but if that priority is assumed to represent the core team's opinion then someone like Alice being paid to do a bunch of things upstream needs to track the priorities of the work she is doing elsewhere, even though she is the one actually doing the work. There are other benefits to complex/multi-dimensional priority over global priority fields too, e.g. it forces the person(s) doing the prioritisation to give consideration to the relative priority of tasks, by nature of being a collection of ordered lists. I've often seen projects using global priority end up with every single task marked "Important", to the point where new priority levels have been introduced once a clearly more important task was created. It feels to me like the real issue here is that StoryBoard doesn't communicate priority well at best, and not at all to a complete outsider. I feel that we can better solve that by improving how StoryBoard communicates the priorities people have expressed, than by adding a special priority field that only certain people are allowed to click on. To throw out some ideas, it seems to me that although we've avoided ACLs for things on the whole, we could utilise Teams better to reach some kind of solution here. 
I'm envisioning being able to link a Project to a Team, and then allowing the Team to define a Board which represents the "official" priorities. Those priorities could then be shown against Tasks which are in that Project. This approach keeps the benefits of using multi-dimensional priorities whilst also solving the problem that priority isn't easily visible to downstream consumers. It also retains the ability for contributors who are interested in the priorities expressed by other people/groups to pay attention to those lists too. It will enforce a more strictly defined workflow for priority too. I was imagining something along the lines of using boards with lanes to represent broad priority categories, with the cards in those lanes arranged in rough priority order. - Adam From yamamoto at midokura.com Wed Sep 26 09:55:59 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Wed, 26 Sep 2018 18:55:59 +0900 Subject: [openstack-dev] [taas] rocky Message-ID: hi, it seems we forgot to create the rocky branch. I'll make a release and the branch sooner or later, unless someone beats me to it. From sfinucan at redhat.com Wed Sep 26 10:41:30 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Wed, 26 Sep 2018 11:41:30 +0100 Subject: [openstack-dev] [all] Sphinx 'doctrees' bug recently resolved Message-ID: <97f9f830e9e303aa4ab65185b2b50f91c036224c.camel@redhat.com> FYI, Sphinx 1.8.0 contained a minor issue (introduced by yours truly) which resulted in an incorrect default doctree directory [1]. This would manifest itself in a 'docs/source/.doctree' directory being created and would only affect projects that call 'sphinx-build' from tox without specifying the '-d' parameter. This has been fixed in Sphinx 1.8.1. I don't think it's a significant enough issue to blacklist the 1.8.0 version, and if you were seeing issues with this in recent weeks, the issue should now be resolved.
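For anyone unsure what the explicit workaround looks like, a hypothetical tox.ini docs environment passing '-d' would be roughly as follows (the environment name, requirements file, and paths here are illustrative, not taken from any particular project):

```ini
# Illustrative sketch only. Passing an explicit doctree cache via '-d'
# keeps Sphinx 1.8.0 from defaulting to 'docs/source/.doctree'; with
# Sphinx 1.8.1 or later the default is correct and '-d' is optional.
[testenv:docs]
deps = -r{toxinidir}/doc/requirements.txt
commands =
    sphinx-build -W -d doc/build/doctrees -b html doc/source doc/build/html
```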
You can still merge patches which explicitly set the '-d' parameter but they should be unnecessary now. Stephen [1] https://github.com/sphinx-doc/sphinx/issues/5418 From singh.surya64mnnit at gmail.com Wed Sep 26 10:44:49 2018 From: singh.surya64mnnit at gmail.com (Surya Singh) Date: Wed, 26 Sep 2018 16:14:49 +0530 Subject: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core In-Reply-To: References: Message-ID: +1 On Tue, Sep 25, 2018 at 9:17 PM Eduardo Gonzalez wrote: > Hi, > > I would like to propose Chason Chan to the kolla-ansible core team. > > Chason is been working on addition of Vitrage roles, rework VpnaaS > service, maintaining > documentation as well as fixing many bugs. > > Voting will be open for 14 days (until 9th of Oct). > > Kolla-ansible cores, please leave a vote. > Consider this mail my +1 vote > > Regards, > Eduardo > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vgvoleg at gmail.com Wed Sep 26 11:01:39 2018 From: vgvoleg at gmail.com (=?UTF-8?B?0J7Qu9C10LMg0J7QstGH0LDRgNGD0Lo=?=) Date: Wed, 26 Sep 2018 14:01:39 +0300 Subject: [openstack-dev] [mistral] Extend created(updated)_at by started(finished)_at to clarify the duration of the task Message-ID: Hi everyone! Please take a look at the blueprint that I've just created: https://blueprints.launchpad.net/mistral/+spec/mistral-add-started-finished-at I'd like to implement this feature, and I also want to update CloudFlow once this is done. Please let me know in the blueprint if I can start implementing. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From colleen at gazlene.net Wed Sep 26 11:04:40 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 26 Sep 2018 13:04:40 +0200 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for 'T' release In-Reply-To: <20180924132250.GW28120@paraplu> References: <20180924132250.GW28120@paraplu> Message-ID: <1537959880.1243153.1521182032.2904E10B@webmail.messagingengine.com> On Mon, Sep 24, 2018, at 3:22 PM, Kashyap Chamarthy wrote: > Hey folks, > > Before we bump the agreed upon[1] minimum versions for libvirt and QEMU > for 'Stein', we need to do the tedious work of picking the NEXT_MIN_* > versions for the 'T' (which is still in the naming phase) release, which > will come out in the autumn (Sep-Nov) of 2019. > > Proposal > -------- > > Looking at the DistroSupportMatrix[2], it seems like we can pick the > libvirt and QEMU versions supported by the next LTS release of Ubuntu -- > 18.04; "Bionic", which are: > > libvirt: 4.0.0 > QEMU: 2.11 > > Debian, Fedora, Ubuntu (Bionic), openSUSE currently already ship the > above versions. And it seems reasonable to assume that the enterprise > distributions will also ship the said versions pretty soon; but let's > double-confirm below. > > Considerations and open questions > --------------------------------- > > (a) KVM for IBM z Systems: John Garbutt pointed out[3] on IRC that: > "IBM announced that KVM for IBM z will be withdrawn, effective March > 31, 2018 [...] development will not only continue unaffected, but > the options for users grow, especially with the recent addition of > SuSE to the existing support in Ubuntu." > > The message seems to be: "use a regular distribution". So this is > covered, if we pick a version based on other distributions. > > (b) Oracle Linux: Can you please confirm if you'll be able to > release libvirt and QEMU to 4.0.0 and 2.11, respectively? > > (c) SLES: Same question as above.
Already responded on IRC and on the patch, but to close the loop here: these should be fine for the next versions of SLES, thanks for checking. Colleen > > Assuming Oracle Linux and SLES confirm, please let us know if there are > any objections if we pick NEXT_MIN_* versions for the OpenStack 'T' > release to be libvirt: 4.0.0 and QEMU: 2.11. > > * * * > > A refresher on libvirt and QEMU release schedules > ------------------------------------------------- > > - There will be at least 12 libvirt releases (_excluding_ maintenance > releases) by Autumn 2019. A new libvirt release comes out every > month[4]. > > - And there will be about 4 releases of QEMU. A new QEMU release > comes out once every four months. > > [1] http://git.openstack.org/cgit/openstack/nova/commit/?h=master&id=28d337b > -- Pick next minimum libvirt / QEMU versions for "Stein" > [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix > [3] http://kvmonz.blogspot.com/2017/03/kvm-for-ibm-z-withdrawal.html > [4] https://libvirt.org/downloads.html#schedule > > -- > /kashyap > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Wed Sep 26 11:42:53 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 26 Sep 2018 07:42:53 -0400 Subject: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions In-Reply-To: <1537953028.1215308.1521068936.71ABA285@webmail.messagingengine.com> References: <5B518A5C-10FD-45D1-B462-E5A02DBBE2FE@gmail.com> <1537953028.1215308.1521068936.71ABA285@webmail.messagingengine.com> Message-ID: <7a50c32b-c88d-b362-95bf-bae2e534b91d@gmail.com> On 09/26/2018 05:10 AM, Colleen Murphy wrote: > Thanks for the summary, Ildiko. I have some questions inline. 
> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote: > > > >> >> We agreed to prefer federation for Keystone and came up with two work >> items to cover missing functionality: >> >> * Keystone to trust a token from an ID Provider master and when the auth >> method is called, perform an idempotent creation of the user, project >> and role assignments according to the assertions made in the token > > This sounds like it is based on the customizations done at Oath, which to my recollection did not use the actual federation implementation in keystone due to its reliance on Athenz (I think?) as an identity manager. Something similar can be accomplished in standard keystone with the mapping API in keystone, which can cause dynamic generation of a shadow user, project and role assignments. > >> * Keystone should support the creation of users and projects with >> predictable UUIDs (eg.: hash of the name of the users and projects). >> This greatly simplifies Image federation and telemetry gathering > > I was in and out of the room and don't recall this discussion exactly. We have historically pushed back hard against allowing setting a project ID via the API, though I can see predictable-but-not-settable as less problematic. One of the use cases from the past was being able to use the same token in different regions, which is problematic from a security perspective. Is that the idea here? Or could someone provide more details on why this is needed? Hi Colleen, I wasn't in the room for this conversation either, but I believe the "use case" wanted here is mostly a convenience one. If the edge deployment is composed of hundreds of small Keystone installations and you have a user (e.g.
an NFV MANO user) which should have visibility across all of those Keystone installations, it becomes a hassle to need to remember (or in the case of headless users, store some lookup of) all the different tenant and user UUIDs for what is essentially the same user across all of those Keystone installations. I'd argue that as long as it's possible to create a Keystone tenant and user with a unique name within a deployment, and as long as it's possible to authenticate using the tenant and user *name* (i.e. not the UUID), then this isn't too big of a problem. However, I do know that a bunch of scripts and external tools rely on setting the tenant and/or user via the UUID values and not the names, so that might be where this feature request is coming from. Hope that makes sense? Best, -jay From adam at sotk.co.uk Wed Sep 26 12:17:03 2018 From: adam at sotk.co.uk (Adam Coldrick) Date: Wed, 26 Sep 2018 13:17:03 +0100 Subject: [openstack-dev] [storyboard] Prioritization? In-Reply-To: References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> <1537865263.2460.2.camel@sotk.co.uk> Message-ID: <1537964223.2460.9.camel@sotk.co.uk> On Tue, 2018-09-25 at 18:40 +0000, CARVER, PAUL wrote: [...] > There is certainly room for additional means of juggling and > discussing/negotiating priorities in the stages before work really gets > under way, but if it doesn't eventually become clear > > 1) who's doing the work > 2) when are they targeting completion > 3) what (if anything) is higher up on their todo list It's entirely possible to track these three things in StoryBoard today, and for other people to view that information.
1) Task assignee, though this should be set when someone actually starts doing the work rather than being used to indicate "$person intends to do this at some point" 2) Due date on a card in a board 3) Lanes in that board ordered by priority The latter two assume that the person doing the work is using a board to track what they're doing, which is probably sensible behaviour we should encourage. It's admittedly difficult for downstream consumers to quickly find the board-related information, but I think that is a discoverability bug (it doesn't currently become clear where exactly these things can be found) rather than a fundamental issue which means we should just abandon the multi-dimensional approach. > then it's impossible for anyone else to make any sort of plans that > depend on that work. Plans could include figuring out how to add more > resources or contingency plans. It's also possible that people or > projects may develop a reputation for not delivering on their stated top > priorities, but that's at least better than having no idea what the > priorities are because every person and project is making up their own > system for tracking it. I would argue that someone who wants to make plans based on upstream work that may or may not get done should be taking the time (which should at worst be something like "reading the docs to find a link to a worklist/board" even with the current implementation) to understand how upstream are expressing the state of their work, though I can understand why they might not always want to. I definitely think that defining an "official"-ish approach is something that should probably be done, to reduce the cognitive load on newcomers. - Adam From jim at jimrollenhagen.com Wed Sep 26 12:31:04 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 26 Sep 2018 08:31:04 -0400 Subject: [openstack-dev] [storyboard] Prioritization?
In-Reply-To: <1537964223.2460.9.camel@sotk.co.uk> References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <20180924224734.jdtp5mijvzrpxoda@yuggoth.org> <1537865263.2460.2.camel@sotk.co.uk> <1537964223.2460.9.camel@sotk.co.uk> Message-ID: On Wed, Sep 26, 2018 at 8:17 AM Adam Coldrick wrote: > On Tue, 2018-09-25 at 18:40 +0000, CARVER, PAUL wrote: > [...] > > There is certainly room for additional means of juggling and > > discussing/negotiating priorities in the stages before work really gets > > under way, but if it doesn't eventually become clear > > > > 1) who's doing the work > > 2) when are they targeting completion > > 3) what (if anything) is higher up on their todo list > > Its entirely possible to track these three things in StoryBoard today, and > for other people to view that information. > > 1) Task assignee, though this should be set when someone actually starts > doing the work rather than being used to indicate "$person intends to do > this at some point" > 2) Due date on a card in a board > 3) Lanes in that board ordered by priority > I, for one, would not want to scroll through every lane on a large project's bug board to find the priority or target date for a given bug. For example, Nova has 819 open bugs right now. It would be a much better user experience to be able to open a specific bug and see the priority or target date. // jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marcin.juszkiewicz at linaro.org Wed Sep 26 12:31:27 2018 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Wed, 26 Sep 2018 14:31:27 +0200 Subject: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core In-Reply-To: References: Message-ID: <5a9c8351-e39a-24a1-8f71-8e6a9af841fe@linaro.org> +1 From florian.engelmann at everyware.ch Wed Sep 26 12:31:57 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Wed, 26 Sep 2018 14:31:57 +0200 Subject: [openstack-dev] [kolla] ceph osd deploy fails Message-ID: Hi, I tried to deploy Rocky in a multinode setup but ceph-osd fails with: failed: [xxxxxxxxxxx-poc2] (item=[0, {u'fs_uuid': u'', u'bs_wal_label': u'', u'external_journal': False, u'bs_blk_label': u'', u'bs_db_partition_num': u'', u'journal_device': u'', u'journal': u'', u'partition': u'/dev/nvme0n1', u'bs_wal_partition_num': u'', u'fs_label': u'', u'journal_num': 0, u'bs_wal_device': u'', u'partition_num': u'1', u'bs_db_label': u'', u'bs_blk_partition_num': u'', u'device': u'/dev/nvme0n1', u'bs_db_device': u'', u'partition_label': u'KOLLA_CEPH_OSD_BOOTSTRAP_BS', u'bs_blk_device': u''}]) => { "changed": true, "item": [ 0, { "bs_blk_device": "", "bs_blk_label": "", "bs_blk_partition_num": "", "bs_db_device": "", "bs_db_label": "", "bs_db_partition_num": "", "bs_wal_device": "", "bs_wal_label": "", "bs_wal_partition_num": "", "device": "/dev/nvme0n1", "external_journal": false, "fs_label": "", "fs_uuid": "", "journal": "", "journal_device": "", "journal_num": 0, "partition": "/dev/nvme0n1", "partition_label": "KOLLA_CEPH_OSD_BOOTSTRAP_BS", "partition_num": "1" } ] } MSG: Container exited with non-zero return code 2 We tried to debug the error message by starting the container with a modified endpoint but we are stuck at the following point right now: docker run -e "HOSTNAME=10.0.153.11" -e "JOURNAL_DEV=" -e "JOURNAL_PARTITION=" -e "JOURNAL_PARTITION_NUM=0" -e "KOLLA_BOOTSTRAP=null" -e 
"KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" -e "KOLLA_SERVICE_NAME=bootstrap-osd-0" -e "OSD_BS_BLK_DEV=" -e "OSD_BS_BLK_LABEL=" -e "OSD_BS_BLK_PARTNUM=" -e "OSD_BS_DB_DEV=" -e "OSD_BS_DB_LABEL=" -e "OSD_BS_DB_PARTNUM=" -e "OSD_BS_DEV=/dev/nvme0n1" -e "OSD_BS_LABEL=KOLLA_CEPH_OSD_BOOTSTRAP_BS" -e "OSD_BS_PARTNUM=1" -e "OSD_BS_WAL_DEV=" -e "OSD_BS_WAL_LABEL=" -e "OSD_BS_WAL_PARTNUM=" -e "OSD_DEV=/dev/nvme0n1" -e "OSD_FILESYSTEM=xfs" -e "OSD_INITIAL_WEIGHT=1" -e "OSD_PARTITION=/dev/nvme0n1" -e "OSD_PARTITION_NUM=1" -e "OSD_STORETYPE=bluestore" -e "USE_EXTERNAL_JOURNAL=false" -v "/etc/kolla//ceph-osd/:/var/lib/kolla/config_files/:ro" -v "/etc/localtime:/etc/localtime:ro" -v "/dev/:/dev/" -v "kolla_logs:/var/log/kolla/" -ti --privileged=true --entrypoint /bin/bash 10.0.128.7:5000/openstack/openstack-kolla-cfg/ubuntu-source-ceph-osd:7.0.0.3 cat /var/lib/kolla/config_files/ceph.client.admin.keyring > /etc/ceph/ceph.client.admin.keyring cat /var/lib/kolla/config_files/ceph.conf > /etc/ceph/ceph.conf (bootstrap-osd-0)[root at 985e2dee22bc /]# /usr/bin/ceph-osd -d --public-addr 10.0.153.11 --cluster-addr 10.0.153.11 usage: ceph-osd -i [flags] --osd-data PATH data directory --osd-journal PATH journal file or block device --mkfs create a [new] data directory --mkkey generate a new secret key. This is normally used in combination with --mkfs --convert-filestore run any pending upgrade operations --flush-journal flush all data out of journal --mkjournal initialize a new journal --check-wants-journal check whether a journal is desired --check-allows-journal check whether a journal is allowed --check-needs-journal check whether a journal is required --debug_osd set debug level (e.g. 
10) --get-device-fsid PATH get OSD fsid for the given block device --conf/-c FILE read configuration from the given configuration file --id/-i ID set ID portion of my name --name/-n TYPE.ID set name --cluster NAME set cluster name (default: ceph) --setuser USER set uid to user or uid (and gid to user's gid) --setgroup GROUP set gid to group or gid --version show version and quit -d run in foreground, log to stderr. -f run in foreground, log to usual location. --debug_ms N set message debug level (e.g. 1) 2018-09-26 12:28:07.801066 7fbda64b4e40 0 ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable), process (unknown), pid 46 2018-09-26 12:28:07.801078 7fbda64b4e40 -1 must specify '-i #' where # is the osd number But it looks like "-i" is not set anywhere? grep command /opt/stack/kolla-ansible/ansible/roles/ceph/templates/ceph-osd.json.j2 "command": "/usr/bin/ceph-osd -f --public-addr {{ hostvars[inventory_hostname]['ansible_' + storage_interface]['ipv4']['address'] }} --cluster-addr {{ hostvars[inventory_hostname]['ansible_' + cluster_interface]['ipv4']['address'] }}", What's wrong with our setup? All the best, Flo -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From dabarren at gmail.com Wed Sep 26 12:44:51 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 26 Sep 2018 14:44:51 +0200 Subject: [openstack-dev] [kolla] ceph osd deploy fails In-Reply-To: References: Message-ID: Hi, what version of Rocky are you using? It may have been in the middle of a backport which temporarily broke Ceph. Could you try the latest stable/rocky branch? It is now working properly.
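As a side note on the usage error quoted above: when ceph-osd is invoked by hand like this, the OSD number has to be supplied explicitly with '-i' (per the '--id/-i ID' line in the usage text). A hypothetical manual invocation, assuming an OSD had already been registered in the cluster as id 0, would look like:

```
# Sketch only: '-i 0' assumes an OSD with id 0 already exists in the
# cluster; substitute the real OSD number from 'ceph osd ls'.
/usr/bin/ceph-osd -d -i 0 --public-addr 10.0.153.11 --cluster-addr 10.0.153.11
```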
Regards On Wed, Sep 26, 2018, 2:32 PM Florian Engelmann < florian.engelmann at everyware.ch> wrote: > Hi, > > I tried to deploy Rocky in a multinode setup but ceph-osd fails with: > > > failed: [xxxxxxxxxxx-poc2] (item=[0, {u'fs_uuid': u'', u'bs_wal_label': > u'', u'external_journal': False, u'bs_blk_label': u'', > u'bs_db_partition_num': u'', u'journal_device': u'', u'journal': u'', > u'partition': u'/dev/nvme0n1', u'bs_wal_partition_num': u'', > u'fs_label': u'', u'journal_num': 0, u'bs_wal_device': u'', > u'partition_num': u'1', u'bs_db_label': u'', u'bs_blk_partition_num': > u'', u'device': u'/dev/nvme0n1', u'bs_db_device': u'', > u'partition_label': u'KOLLA_CEPH_OSD_BOOTSTRAP_BS', u'bs_blk_device': > u''}]) => { > "changed": true, > "item": [ > 0, > { > "bs_blk_device": "", > "bs_blk_label": "", > "bs_blk_partition_num": "", > "bs_db_device": "", > "bs_db_label": "", > "bs_db_partition_num": "", > "bs_wal_device": "", > "bs_wal_label": "", > "bs_wal_partition_num": "", > "device": "/dev/nvme0n1", > "external_journal": false, > "fs_label": "", > "fs_uuid": "", > "journal": "", > "journal_device": "", > "journal_num": 0, > "partition": "/dev/nvme0n1", > "partition_label": "KOLLA_CEPH_OSD_BOOTSTRAP_BS", > "partition_num": "1" > } > ] > } > > MSG: > > Container exited with non-zero return code 2 > > We tried to debug the error message by starting the container with a > modified endpoint but we are stuck at the following point right now: > > > docker run -e "HOSTNAME=10.0.153.11" -e "JOURNAL_DEV=" -e > "JOURNAL_PARTITION=" -e "JOURNAL_PARTITION_NUM=0" -e > "KOLLA_BOOTSTRAP=null" -e "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" -e > "KOLLA_SERVICE_NAME=bootstrap-osd-0" -e "OSD_BS_BLK_DEV=" -e > "OSD_BS_BLK_LABEL=" -e "OSD_BS_BLK_PARTNUM=" -e "OSD_BS_DB_DEV=" -e > "OSD_BS_DB_LABEL=" -e "OSD_BS_DB_PARTNUM=" -e "OSD_BS_DEV=/dev/nvme0n1" > -e "OSD_BS_LABEL=KOLLA_CEPH_OSD_BOOTSTRAP_BS" -e "OSD_BS_PARTNUM=1" -e > "OSD_BS_WAL_DEV=" -e "OSD_BS_WAL_LABEL=" -e "OSD_BS_WAL_PARTNUM=" -e 
> "OSD_DEV=/dev/nvme0n1" -e "OSD_FILESYSTEM=xfs" -e "OSD_INITIAL_WEIGHT=1" > -e "OSD_PARTITION=/dev/nvme0n1" -e "OSD_PARTITION_NUM=1" -e > "OSD_STORETYPE=bluestore" -e "USE_EXTERNAL_JOURNAL=false" -v > "/etc/kolla//ceph-osd/:/var/lib/kolla/config_files/:ro" -v > "/etc/localtime:/etc/localtime:ro" -v "/dev/:/dev/" -v > "kolla_logs:/var/log/kolla/" -ti --privileged=true --entrypoint > /bin/bash > > 10.0.128.7:5000/openstack/openstack-kolla-cfg/ubuntu-source-ceph-osd:7.0.0.3 > > > > cat /var/lib/kolla/config_files/ceph.client.admin.keyring > > /etc/ceph/ceph.client.admin.keyring > > > cat /var/lib/kolla/config_files/ceph.conf > /etc/ceph/ceph.conf > > > (bootstrap-osd-0)[root at 985e2dee22bc /]# /usr/bin/ceph-osd -d > --public-addr 10.0.153.11 --cluster-addr 10.0.153.11 > usage: ceph-osd -i [flags] > --osd-data PATH data directory > --osd-journal PATH > journal file or block device > --mkfs create a [new] data directory > --mkkey generate a new secret key. This is normally used in > combination with --mkfs > --convert-filestore > run any pending upgrade operations > --flush-journal flush all data out of journal > --mkjournal initialize a new journal > --check-wants-journal > check whether a journal is desired > --check-allows-journal > check whether a journal is allowed > --check-needs-journal > check whether a journal is required > --debug_osd set debug level (e.g. 10) > --get-device-fsid PATH > get OSD fsid for the given block device > > --conf/-c FILE read configuration from the given configuration file > --id/-i ID set ID portion of my name > --name/-n TYPE.ID set name > --cluster NAME set cluster name (default: ceph) > --setuser USER set uid to user or uid (and gid to user's gid) > --setgroup GROUP set gid to group or gid > --version show version and quit > > -d run in foreground, log to stderr. > -f run in foreground, log to usual location. > --debug_ms N set message debug level (e.g. 
1) > 2018-09-26 12:28:07.801066 7fbda64b4e40 0 ceph version 12.2.4 > (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable), process > (unknown), pid 46 > 2018-09-26 12:28:07.801078 7fbda64b4e40 -1 must specify '-i #' where # > is the osd number > > > But it looks like "-i" is not set anywere? > > grep command > /opt/stack/kolla-ansible/ansible/roles/ceph/templates/ceph-osd.json.j2 > "command": "/usr/bin/ceph-osd -f --public-addr {{ > hostvars[inventory_hostname]['ansible_' + > storage_interface]['ipv4']['address'] }} --cluster-addr {{ > hostvars[inventory_hostname]['ansible_' + > cluster_interface]['ipv4']['address'] }}", > > What's wrong with our setup? > > All the best, > Flo > > > -- > > EveryWare AG > Florian Engelmann > Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: mailto:florian.engelmann at everyware.ch > web: http://www.everyware.ch > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Sep 26 13:20:26 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 26 Sep 2018 13:20:26 +0000 Subject: [openstack-dev] [storyboard] why use different "bug" tags per project? In-Reply-To: References: Message-ID: <20180926132026.7pgkjmni6t7ttbvm@yuggoth.org> On 2018-09-26 00:50:16 -0600 (-0600), Chris Friesen wrote: > At the PTG, it was suggested that each project should tag their bugs with > "-bug" to avoid tags being "leaked" across projects, or something > like that. > > Could someone elaborate on why this was recommended? It seems to me that > it'd be better for all projects to just use the "bug" tag for consistency. 
> > If you want to get all bugs in a specific project it would be pretty easy to > search for stories with a tag of "bug" and a project of "X". Because stories are a cross-project concept and tags are applied to the story, it's possible for a story with tasks for both openstack/nova and openstack/cinder projects to represent a bug for one and a new feature for the other. If they're tagged nova-bug and cinder-feature then that would allow them to match the queries those teams have defined for their worklists, boards, et cetera. It's of course possible to just hand-wave that these intersections are rare enough to ignore and go ahead and use generic story tags, but the recommendation is there to allow teams to avoid disagreements in such cases. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From e0ne at e0ne.info Wed Sep 26 13:50:48 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 26 Sep 2018 16:50:48 +0300 Subject: [openstack-dev] [horizon] Horizon gates are broken In-Reply-To: References: Message-ID: Hi all, Patch [1] is merged and our gates are un-blocked now. I went throw review list and post 'recheck' where it was needed. We need to cherry-pick this fix to stable releases too. I'll do it asap Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Mon, Sep 24, 2018 at 11:18 AM Ivan Kolodyazhny wrote: > Hi team, > > Unfortunately, horizon gates are broken now. We can't merge any patch due > to the -1 from CI. > I don't want to disable tests now, that's why I proposed a fix [1]. > > We'd got released some of XStatic-* packages last week. At least new > XStatic-jQuery [2] breaks horizon [3]. I'm working on a new job for > requirements repo [4] to prevent such issues in the future. > > Please, do not try 'recheck' until [1] will be merged. 
> [1] https://review.openstack.org/#/c/604611/ > [2] https://pypi.org/project/XStatic-jQuery/#history > [3] https://bugs.launchpad.net/horizon/+bug/1794028 > [4] https://review.openstack.org/#/c/604613/ > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Wed Sep 26 14:05:35 2018 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Wed, 26 Sep 2018 07:05:35 -0700 Subject: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions In-Reply-To: <7a50c32b-c88d-b362-95bf-bae2e534b91d@gmail.com> References: <5B518A5C-10FD-45D1-B462-E5A02DBBE2FE@gmail.com> <1537953028.1215308.1521068936.71ABA285@webmail.messagingengine.com> <7a50c32b-c88d-b362-95bf-bae2e534b91d@gmail.com> Message-ID: This discussion was also not about user-assigned IDs, but predictable IDs with the auto provisioning. We still want it to be something keystone controls (locally). It might be a hash of the domain ID and a value from the assertion (similar to the LDAP user ID generator). As long as the IDs are predictable within an environment when auto provisioning via federation, we should be good. And the problem of the totally unknown ID until provisioning could be made less of an issue for someone working within a massively federated edge environment. I don't want user/explicit admin-set IDs. On Wed, Sep 26, 2018, 04:43 Jay Pipes wrote: > On 09/26/2018 05:10 AM, Colleen Murphy wrote: > > Thanks for the summary, Ildiko. I have some questions inline.
> > > > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote: > > > > > > > >> > >> We agreed to prefer federation for Keystone and came up with two work > >> items to cover missing functionality: > >> > >> * Keystone to trust a token from an ID Provider master and when the auth > >> method is called, perform an idempotent creation of the user, project > >> and role assignments according to the assertions made in the token > > > > This sounds like it is based on the customizations done at Oath, which > to my recollection did not use the actual federation implementation in > keystone due to its reliance on Athenz (I think?) as an identity manager. > Something similar can be accomplished in standard keystone with the mapping > API in keystone which can cause dynamic generation of a shadow user, > project and role assignments. > > > >> * Keystone should support the creation of users and projects with > >> predictable UUIDs (eg.: hash of the name of the users and projects). > >> This greatly simplifies Image federation and telemetry gathering > > > > I was in and out of the room and don't recall this discussion exactly. > We have historically pushed back hard against allowing setting a project ID > via the API, though I can see predictable-but-not-settable as less > problematic. One of the use cases from the past was being able to use the > same token in different regions, which is problematic from a security > perspective. Is that that idea here? Or could someone provide more details > on why this is needed? > > Hi Colleen, > > I wasn't in the room for this conversation either, but I believe the > "use case" wanted here is mostly a convenience one. If the edge > deployment is composed of hundreds of small Keystone installations and > you have a user (e.g. 
an NFV MANO user) which should have visibility > across all of those Keystone installations, it becomes a hassle to need > to remember (or in the case of headless users, store some lookup of) all > the different tenant and user UUIDs for what is essentially the same > user across all of those Keystone installations. > > I'd argue that as long as it's possible to create a Keystone tenant and > user with a unique name within a deployment, and as long as it's > possible to authenticate using the tenant and user *name* (i.e. not the > UUID), then this isn't too big of a problem. However, I do know that a > bunch of scripts and external tools rely on setting the tenant and/or > user via the UUID values and not the names, so that might be where this > feature request is coming from. > > Hope that makes sense? > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Sep 26 14:22:30 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 26 Sep 2018 09:22:30 -0500 Subject: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables Message-ID: <20180926142229.GA26870@sm-workstation> During the Stein PTG in Denver, the release management team talked about ways we can make things simpler and reduce the "paper pushing" work that all teams need to do right now. One topic that came up was the usefulness of pushing tags around milestones during the cycle. There were a couple of needs identified for doing such "milestone releases": 1) It tests the release automation machinery to identify problems before the RC and final release crunch time. 
2) It creates a nice cadence throughout the cycle to help teams stay on track and focus on the right things for each phase of the cycle. 3) It gives us an indication that teams are healthy, active, and planning to include their components in the final release. One of the big motivators in the past was also to have output that downstream distros and users could pick up for testing and early packaging. Based on our admittedly anecdotal small sample, it doesn't appear this is actually a big need, so we propose to stop tagging milestone releases for the cycle-with-milestone projects. We would still have "milestones" during the cycle to facilitate work organization and create a cadence: teams should still be aware of them, and we will continue to communicate those dates in the schedule and in the release countdown emails. But you would no longer be required to request a release for each milestone. Beta releases would be optional: if teams do want to have some beta version tags before the final release they can still request them - whether on one of the milestone dates, or whenever there is the need for the project. Release candidates would still require a tag. To facilitate that step and guarantee we have a release candidate for every deliverable, the release team proposes to automatically generate a release request early in the week of the RC deadline. That patch would be used as a base to communicate with the team: if a team wants to wait for a specific patch to make it to the RC, someone from the team can -1 the patch to have it held, or update that patch with a different commit SHA. If there are no issues, ideally we would want a +1 from the PTL and/or release liaison to indicate approval, but we would also consider no negative feedback as an indicator that the automatically proposed patches without a -1 can all be approved at the end of the RC deadline week. 
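The hold/approve rule described above ("a -1 holds the patch; an explicit +1 is preferred but no negative feedback by the deadline also counts as approval") is simple enough to state as code. A toy sketch for illustration only — this is not the release team's actual tooling, and the vote representation is invented here:

```python
# Illustrative sketch (NOT the release team's real tooling) of the
# approval rule for auto-proposed RC release requests described above.

def can_approve_rc_patch(code_review_votes):
    """Decide whether an auto-proposed RC patch may be approved.

    code_review_votes: integers (+1, 0, -1) left by the project team
    on the proposed release request (hypothetical representation).
    """
    # Any -1 means the team wants the release held, e.g. to wait for a
    # specific patch or to update the proposed commit SHA.
    if any(v < 0 for v in code_review_votes):
        return False
    # An explicit +1 from the PTL or release liaison is preferred, but
    # silence (no negative feedback) is treated as consent at the end
    # of the RC deadline week.
    return True

print(can_approve_rc_patch([1, 0]))   # True: explicit approval
print(can_approve_rc_patch([]))       # True: no feedback, approve at deadline
print(can_approve_rc_patch([1, -1]))  # False: held by a -1
```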
To cover point (3) above, and clearly know that a project is healthy and should be included in the coordinated release, we are thinking of requiring a person for each team to add their name to a "manifest" of sorts for the release cycle. That "final release liaison" person would be the designated person to follow through on finishing out the releases for that team, and would be designated ahead of the final release phases. With all these changes, we would rename the cycle-with-milestones release model to something like cycle-with-rc. FAQ: Q: Does this mean I don't need to pay attention to releases any more and the release team will just take care of everything? A: No. We still want teams engaged in the release cycle and would feel much more comfortable if we get an explicit +1 from the team on any proposed tags or releases. Q: Who should sign up to be the final release liaison ? A: Anyone in the team really. Could be the PTL, the standing release liaison, or someone else stepping up to cover that role. -- Thanks! The Release Team From fungi at yuggoth.org Wed Sep 26 14:36:26 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 26 Sep 2018 14:36:26 +0000 Subject: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables In-Reply-To: <20180926142229.GA26870@sm-workstation> References: <20180926142229.GA26870@sm-workstation> Message-ID: <20180926143626.eohywqxdy6tkhfbc@yuggoth.org> On 2018-09-26 09:22:30 -0500 (-0500), Sean McGinnis wrote: [...] > It tests the release automation machinery to identify problems > before the RC and final release crunch time. [...] More to the point, it helped spot changes to projects which made it impossible to generate and publish their release artifacts. Coverage has improved for finding these issues before merging now, as well as in flight tests on proposed releases, making the risk lower than it used to be. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From miguel at mlavalle.com Wed Sep 26 14:39:23 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 26 Sep 2018 09:39:23 -0500 Subject: [openstack-dev] [taas] rocky In-Reply-To: References: Message-ID: Thanks Takashi On Wed, Sep 26, 2018 at 4:57 AM Takashi Yamamoto wrote: > hi, > > it seems we forgot to create rocky branch. > i'll make a release and the branch sooner or later, unless someone > beat me to do so. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Sep 26 14:45:21 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 26 Sep 2018 09:45:21 -0500 Subject: [openstack-dev] [storyboard] why use different "bug" tags per project? In-Reply-To: <20180926132026.7pgkjmni6t7ttbvm@yuggoth.org> References: <20180926132026.7pgkjmni6t7ttbvm@yuggoth.org> Message-ID: On 9/26/18 8:20 AM, Jeremy Stanley wrote: > On 2018-09-26 00:50:16 -0600 (-0600), Chris Friesen wrote: >> At the PTG, it was suggested that each project should tag their bugs with >> "-bug" to avoid tags being "leaked" across projects, or something >> like that. >> >> Could someone elaborate on why this was recommended? It seems to me that >> it'd be better for all projects to just use the "bug" tag for consistency. >> >> If you want to get all bugs in a specific project it would be pretty easy to >> search for stories with a tag of "bug" and a project of "X". 
> > Because stories are a cross-project concept and tags are applied to > the story, it's possible for a story with tasks for both > openstack/nova and openstack/cinder projects to represent a bug for > one and a new feature for the other. If they're tagged nova-bug and > cinder-feature then that would allow them to match the queries those > teams have defined for their worklists, boards, et cetera. It's of > course possible to just hand-wave that these intersections are rare > enough to ignore and go ahead and use generic story tags, but the > recommendation is there to allow teams to avoid disagreements in > such cases. Would it be possible to automate that tagging on import? Essentially tag every lp bug that is not wishlist with $PROJECT-bug and wishlists with $PROJECT-feature. Otherwise someone has to go through and re-categorize everything in Storyboard. I don't know if everyone would want that, but if this is the recommended practice I would want it for Oslo. From gfidente at redhat.com Wed Sep 26 15:02:50 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Wed, 26 Sep 2018 17:02:50 +0200 Subject: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions In-Reply-To: <5B518A5C-10FD-45D1-B462-E5A02DBBE2FE@gmail.com> References: <5B518A5C-10FD-45D1-B462-E5A02DBBE2FE@gmail.com> Message-ID: hi, thanks for sharing this! At TripleO we're looking at implementing in Stein deployment of at least 1 regional DC and N edge zones. More comments below. On 9/25/18 11:21 AM, Ildiko Vancsa wrote: > Hi, > > Hereby I would like to give you a short summary on the discussions that happened at the PTG in the area of edge. > > The Edge Computing Group sessions took place on Tuesday where our main activity was to draw an overall architecture diagram to capture the basic setup and requirements of edge towards a set of OpenStack services. 
Our main and initial focus was around Keystone and Glance, but discussion with other project teams such as Nova, Ironic and Cinder also happened later during the week. > > The edge architecture diagrams we drew are part of a so called Minimum Viable Product (MVP) which refers to the minimalist nature of the setup where we didn’t try to cover all aspects but rather define a minimum set of services and requirements to get to a functional system. This architecture will evolve further as we collect more use cases and requirements. > > To describe edge use cases on a higher level with Mobile Edge as a use case in the background we identified three main building blocks: > > * Main or Regional Datacenter (DC) > * Edge Sites > * Far Edge Sites or Cloudlets > > We examined the architecture diagram with the following user stories in mind: > > * As a deployer of OpenStack I want to minimize the number of control planes I need to manage across a large geographical region. > * As a user of OpenStack I expect instance autoscale continues to function in an edge site if connectivity is lost to the main datacenter. > * As a deployer of OpenStack I want disk images to be pulled to a cluster on demand, without needing to sync every disk image everywhere. > * As a user of OpenStack I want to manage all of my instances in a region (from regional DC to far edge cloudlets) via a single API endpoint. > > We concluded to talk about service requirements in two major categories: > > 1. The Edge sites are fully operational in case of a connection loss between the Regional DC and the Edge site which requires control plane services running on the Edge site > 2. 
Having full control on the Edge site is not critical in case a connection loss between the Regional DC and an Edge site which can be satisfied by having the control plane services running only in the Regional DC
>
> In the first case the orchestration of the services becomes harder and is not necessarily solved yet, while in the second case you have centralized control but losing functionality on the Edge sites in the event of a connection loss.
>
> We did not discuss things such as HA at the PTG and we did not go into details on networking during the architectural discussion either.

while TripleO used to rely on pacemaker to manage cinder-volume A/P in the controlplane, we'd like to push for cinder-volume A/A in the edge zone and avoid the deployment of pacemaker in the edge zones

the safety of cinder-volume A/A seems to depend mostly on the backend driver and for RBD we should be good

> We agreed to prefer federation for Keystone and came up with two work items to cover missing functionality:
>
> * Keystone to trust a token from an ID Provider master and when the auth method is called, perform an idempotent creation of the user, project and role assignments according to the assertions made in the token
> * Keystone should support the creation of users and projects with predictable UUIDs (eg.: hash of the name of the users and projects). This greatly simplifies Image federation and telemetry gathering
>
> For Glance we explored image caching and spent some time discussing the option to also cache metadata so a user can boot new instances at the edge in case of a network connection loss which would result in being disconnected from the registry:
>
> * I as a user of Glance, want to upload an image in the main datacenter and boot that image in an edge datacenter. Fetch the image to the edge datacenter with its metadata
>
> We are still in the process of documenting the discussions and drawing the architecture diagrams and flows for Keystone and Glance.
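The "predictable UUIDs" work item quoted above (hash of the name of the users and projects) maps naturally onto name-based version-5 UUIDs, which deterministically hash a namespace plus a name into an RFC 4122-valid identifier. A minimal sketch, assuming a hypothetical per-deployment namespace — this is not the actual Keystone implementation:

```python
import uuid

# Hypothetical namespace shared by all edge Keystone installations.
# Any agreed-upon UUID works; uuid5 output is fully deterministic
# given the same namespace and name.
EDGE_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "edge.example.com")

def predictable_id(kind, name):
    """Derive a stable 32-char hex ID for a user or project from its name."""
    return uuid.uuid5(EDGE_NAMESPACE, f"{kind}:{name}").hex

# Every site running this derives identical IDs, so a headless user
# (e.g. an NFV MANO user) or a telemetry collector can compute them
# instead of storing hundreds of per-site UUID mappings.
print(predictable_id("project", "edge-tenant"))
print(predictable_id("user", "mano-admin"))
```

Note this gives "predictable-but-not-settable" behavior: the ID is a pure function of the name, so nothing is accepted from the API caller.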
for glance we'd like to deploy only one glance-api in the regional dc and configure glance/cache in each edge zone ... pointing all instances to a shared database this should solve the metadata problem and also provide for storage "locality" into every edge zone > In addition to the above we went through Dublin PTG wiki (https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG) capturing requirements: > > * we agreed to consider the list of requirements on the wiki finalized for now > * agreed to move there the additional requirements listed on the Use Cases (https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases) wiki page > > For the details on the discussions with related OpenStack projects you can check the following etherpads for notes: > > * Cinder: https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018 > * Glance: https://etherpad.openstack.org/p/glance-stein-edge-architecture > * Ironic: https://etherpad.openstack.org/p/ironic-stein-ptg-edge > * Keystone: https://etherpad.openstack.org/p/keystone-stein-edge-architecture > * Neutron: https://etherpad.openstack.org/p/neutron-stein-ptg > * Nova: https://etherpad.openstack.org/p/nova-ptg-stein > > Notes from the StarlingX sessions: https://etherpad.openstack.org/p/stx-PTG-agenda here is a link to the TripleO edge squad etherpad as well: https://etherpad.openstack.org/p/tripleo-edge-squad-status the edge squad is meeting weekly. > We are still working on the MVP architecture to clean it up and discuss comments and questions before moving it to a wiki page. Please let me know if you would like to get access to the document and I will share it with you. > > Please let me know if you have any questions or comments to the above captured items. thanks again! -- Giulio Fidente GPG KEY: 08D733BA From pkovar at redhat.com Wed Sep 26 15:08:05 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 26 Sep 2018 17:08:05 +0200 Subject: [openstack-dev] [docs] Nominating Ian Y. 
Choi for openstack-doc-core In-Reply-To: <0aa3ebd2-82d4-4a60-7162-c974c2d6449c@gmail.com> References: <20180919115022.825829a419ef7ac1573a76a0@redhat.com> <4f413d36-463e-477a-9886-79bf55df677c@suse.com> <07fcbf71a9406e8d7b918b238377d503@arcor.de> <0aa3ebd2-82d4-4a60-7162-c974c2d6449c@gmail.com> Message-ID: <20180926170805.9b76add6c421834f1af195ed@redhat.com> On Sat, 22 Sep 2018 23:32:06 +0900 "Ian Y. Choi" wrote: > Thanks a lot all for such nomination & agreement! > > I would like to do my best after I become doc-core as like what I > current do, > although I still need the help from so many kind, energetic, and > enthusiastic OpenStack contributors and core members > on OpenStack documentation and so many projects. Thank you, Ian. Just updated the perms, congrats on your new role! Best, pk > Melvin Hillsman wrote on 9/21/2018 5:31 AM: > > ++ > > > > On Thu, Sep 20, 2018 at 3:11 PM Frank Kloeker > > wrote: > > > > Am 2018-09-19 20:54, schrieb Andreas Jaeger: > > > On 2018-09-19 20:50, Petr Kovar wrote: > > >> Hi all, > > >> > > >> Based on our PTG discussion, I'd like to nominate Ian Y. Choi for > > >> membership in the openstack-doc-core team. I think Ian doesn't > > need an > > >> introduction, he's been around for a while, recently being deeply > > >> involved > > >> in infra work to get us robust support for project team docs > > >> translation and > > >> PDF builds. > > >> > > >> Having Ian on the core team will also strengthen our > > integration with > > >> the i18n community. > > >> > > >> Please let the ML know should you have any objections. 
> > > > > > The opposite ;), heartly agree with adding him, > > > > > > Andreas > > > > ++ > > > > Frank > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > -- > > Kind regards, > > > > Melvin Hillsman > > mrhillsman at gmail.com > > mobile: (832) 264-2646 > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Petr Kovar Documentation Program Manager | Red Hat Virtualization Customer Content Services | Red Hat Czech s.r.o. From melwittt at gmail.com Wed Sep 26 15:12:00 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 26 Sep 2018 08:12:00 -0700 Subject: [openstack-dev] [nova] review runways for Stein are open Message-ID: <45376088-887a-16dd-51e2-1e1db838f331@gmail.com> Just wanted to remind everyone that review runways for Stein are OPEN. Please feel free to add your approved, ready-for-review blueprints to the queue: https://etherpad.openstack.org/p/nova-runways-stein Cheers, -melanie From tpb at dyncloud.net Wed Sep 26 15:17:10 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 26 Sep 2018 11:17:10 -0400 Subject: [openstack-dev] [storyboard] why use different "bug" tags per project? 
In-Reply-To: References: <20180926132026.7pgkjmni6t7ttbvm@yuggoth.org> Message-ID: <20180926151710.4fmlk74z7snkufkq@barron.net> On 26/09/18 09:45 -0500, Ben Nemec wrote: > > >On 9/26/18 8:20 AM, Jeremy Stanley wrote: >>On 2018-09-26 00:50:16 -0600 (-0600), Chris Friesen wrote: >>>At the PTG, it was suggested that each project should tag their bugs with >>>"-bug" to avoid tags being "leaked" across projects, or something >>>like that. >>> >>>Could someone elaborate on why this was recommended? It seems to me that >>>it'd be better for all projects to just use the "bug" tag for consistency. >>> >>>If you want to get all bugs in a specific project it would be pretty easy to >>>search for stories with a tag of "bug" and a project of "X". >> >>Because stories are a cross-project concept and tags are applied to >>the story, it's possible for a story with tasks for both >>openstack/nova and openstack/cinder projects to represent a bug for >>one and a new feature for the other. If they're tagged nova-bug and >>cinder-feature then that would allow them to match the queries those >>teams have defined for their worklists, boards, et cetera. It's of >>course possible to just hand-wave that these intersections are rare >>enough to ignore and go ahead and use generic story tags, but the >>recommendation is there to allow teams to avoid disagreements in >>such cases. > >Would it be possible to automate that tagging on import? Essentially >tag every lp bug that is not wishlist with $PROJECT-bug and wishlists >with $PROJECT-feature. Otherwise someone has to go through and >re-categorize everything in Storyboard. > >I don't know if everyone would want that, but if this is the >recommended practice I would want it for Oslo. 
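The import-time tagging Ben proposes in the quoted message boils down to a mapping from a Launchpad bug's importance to a per-project story tag. A rough sketch of that mapping, not part of any existing migration tool:

```python
def story_tag(project, lp_importance):
    """Map a Launchpad bug's importance to a per-project story tag.

    Following the convention discussed in this thread: 'Wishlist'
    bugs become <project>-feature, everything else <project>-bug.
    """
    kind = "feature" if lp_importance == "Wishlist" else "bug"
    return f"{project}-{kind}"

print(story_tag("oslo", "High"))      # oslo-bug
print(story_tag("oslo", "Wishlist"))  # oslo-feature
```

An importer would then apply `story_tag(project, bug.importance)` to each migrated bug, so teams don't have to re-categorize everything by hand in Storyboard.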
I would think this is a common want at least for the projects in the central box in the project map [1] -- Tom Barron (tbarron) [1] https://www.openstack.org/openstack-map > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Wed Sep 26 15:28:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 26 Sep 2018 11:28:35 -0400 Subject: [openstack-dev] [goal][python3] week 7 update Message-ID: This is week 7 of the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == Things We Learned This Week == When we updated the tox.ini settings for jobs like pep8 and release notes early in the Rocky session we only touched some of the official repositories. I'll be working on making a list of the ones we missed so we can update them by the end of Stein. == Ongoing and Completed Work == Teams are making great progress, but it looks like we have some lingering changes in branches where the test jobs are failing. 
+---------------------+---------+--------------+---------+----------+---------+------------+-------+--------------------+
| Team                | zuul    | tox defaults | Docs    | 3.6 unit | Failing | Unreviewed | Total | Champion           |
+---------------------+---------+--------------+---------+----------+---------+------------+-------+--------------------+
| adjutant            | +       | -            | -       | +        | 0       | 0          | 5     | Doug Hellmann      |
| barbican            | 11/ 13  | +            | 1/ 3    | +        | 6       | 4          | 20    | Doug Hellmann      |
| blazar              | +       | +            | +       | +        | 0       | 0          | 25    | Nguyen Hai         |
| Chef OpenStack      | +       | -            | -       | -        | 0       | 0          | 1     | Doug Hellmann      |
| cinder              | +       | +            | +       | +        | 0       | 0          | 31    | Doug Hellmann      |
| cloudkitty          | +       | +            | +       | +        | 0       | 0          | 24    | Doug Hellmann      |
| congress            | +       | +            | +       | +        | 0       | 0          | 24    | Nguyen Hai         |
| cyborg              | +       | +            | +       | +        | 0       | 0          | 16    | Nguyen Hai         |
| designate           | +       | +            | +       | +        | 0       | 0          | 24    | Nguyen Hai         |
| Documentation       | +       | +            | +       | +        | 0       | 0          | 22    | Doug Hellmann      |
| dragonflow          | +       | -            | +       | +        | 0       | 0          | 6     | Nguyen Hai         |
| ec2-api             | +       | -            | +       | +        | 0       | 0          | 12    |                    |
| freezer             | 3/ 23   | +            | +       | 2/ 4     | 2       | 0          | 33    |                    |
| glance              | +       | 1/ 4         | +       | +        | 0       | 0          | 26    | Nguyen Hai         |
| heat                | 3/ 27   | 1/ 5         | 1/ 6    | 1/ 7     | 3       | 2          | 45    | Doug Hellmann      |
| horizon             | +       | +            | +       | +        | 0       | 0          | 11    | Nguyen Hai         |
| I18n                | +       | -            | -       | -        | 0       | 0          | 2     | Doug Hellmann      |
| InteropWG           | +       | -            | +       | 1/ 3     | 0       | 0          | 10    | Doug Hellmann      |
| ironic              | 12/ 60  | +            | 2/ 13   | 1/ 12    | 0       | 0          | 90    | Doug Hellmann      |
| karbor              | +       | +            | +       | +        | 0       | 0          | 22    | Nguyen Hai         |
| keystone            | +       | +            | +       | +        | 0       | 0          | 47    | Doug Hellmann      |
| kolla               | +       | -            | +       | +        | 0       | 0          | 12    |                    |
| kuryr               | +       | +            | +       | +        | 0       | 0          | 19    | Doug Hellmann      |
| magnum              | +       | +            | +       | +        | 0       | 0          | 24    |                    |
| manila              | 3/ 19   | +            | +       | +        | 3       | 3          | 28    | Goutham Pacha Ravi |
| masakari            | +       | +            | +       | -        | 0       | 0          | 21    | Nguyen Hai         |
| mistral             | +       | +            | +       | +        | 0       | 0          | 37    | Nguyen Hai         |
| monasca             | 1/ 66   | 1/ 7         | +       | +        | 2       | 1          | 90    | Doug Hellmann      |
| murano              | +       | +            | +       | +        | 0       | 0          | 37    |                    |
| neutron             | 21/ 73  | +            | 2/ 14   | 2/ 13    | 11      | 12         | 106   | Doug Hellmann      |
| nova                | +       | +            | +       | +        | 0       | 0          | 37    |                    |
| octavia             | +       | +            | +       | +        | 0       | 0          | 34    | Nguyen Hai         |
| OpenStack Charms    | 17/117  | -            | -       | -        | 14      | 17         | 117   | Doug Hellmann      |
| OpenStack-Helm      | +       | -            | +       | -        | 0       | 0          | 4     |                    |
| OpenStackAnsible    | 6/270   | +            | 1/ 63   | -        | 7       | 2          | 364   |                    |
| OpenStackClient     | +       | +            | +       | +        | 0       | 0          | 25    |                    |
| OpenStackSDK        | +       | +            | +       | +        | 0       | 0          | 25    |                    |
| oslo                | +       | +            | +       | +        | 0       | 0          | 219   | Doug Hellmann      |
| Packaging-rpm       | +       | -            | +       | +        | 0       | 0          | 7     | Doug Hellmann      |
| PowerVMStackers     | +       | -            | -       | +        | 0       | 0          | 18    | Doug Hellmann      |
| Puppet OpenStack    | +       | -            | +       | -        | 0       | 0          | 236   | Doug Hellmann      |
| qinling             | +       | +            | +       | +        | 0       | 0          | 12    |                    |
| Quality Assurance   | +       | 1/ 2         | +       | +        | 1       | 0          | 51    | Doug Hellmann      |
| rally               | +       | +            | +       | -        | 0       | 0          | 5     | Nguyen Hai         |
| Release Management  | +       | -            | -       | +        | 0       | 0          | 2     | Doug Hellmann      |
| requirements        | +       | -            | +       | +        | 0       | 0          | 7     | Doug Hellmann      |
| sahara              | +       | +            | +       | +        | 0       | 0          | 39    | Doug Hellmann      |
| searchlight         | +       | +            | +       | +        | 0       | 0          | 21    | Nguyen Hai         |
| senlin              | +       | +            | +       | +        | 0       | 0          | 23    | Nguyen Hai         |
| SIGs                | +       | -            | +       | +        | 0       | 0          | 9     | Doug Hellmann      |
| solum               | +       | +            | +       | +        | 0       | 0          | 23    | Nguyen Hai         |
| storlets            | +       | +            | +       | +        | 0       | 0          | 8     |                    |
| swift               | +       | 2/ 2         | +       | +        | 0       | 0          | 16    | Nguyen Hai         |
| tacker              | +       | 1/ 2         | +       | +        | 1       | 1          | 23    | Nguyen Hai         |
| Technical Committee | +       | -            | -       | +        | 0       | 0          | 7     | Doug Hellmann      |
| Telemetry           | 15/ 31  | +            | 2/ 6    | 2/ 6     | 6       | 5          | 49    | Doug Hellmann      |
| tricircle           | +       | +            | +       | +        | 0       | 0          | 14    | Nguyen Hai         |
| tripleo             | 5/111   | +            | +       | +        | 4       | 2          | 154   | Doug Hellmann      |
| trove               | 12/ 17  | +            | +       | +        | 0       | 0          | 25    | Doug Hellmann      |
| User Committee      | +       | -            | 1/ 2    | -        | 0       | 0          | 6     | Doug Hellmann      |
| vitrage             | +       | +            | +       | +        | 0       | 0          | 25    | Nguyen Hai         |
| watcher             | +       | +            | +       | +        | 0       | 0          | 27    | Nguyen Hai         |
| winstackers         | +       | +            | +       | +        | 0       | 0          | 17    |                    |
| zaqar               | +       | +            | +       | +        | 0       | 0          | 24    |                    |
| zun                 | +       | +            | +       | +        | 0       | 0          | 21    | Nguyen Hai         |
|                     | 53/ 65  | 42/ 48       | 51/ 58  | 50/ 56   | 60      | 50         | 2573  |                    |
+---------------------+---------+--------------+---------+----------+---------+------------+-------+--------------------+

== Next Steps ==

All teams should be working to approve the patches proposed by the goal champions, and then to expand functional test coverage for python 3 and document their status in the wiki.

== How can you help? ==

1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+)
2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects.
3. Work on adding functional test jobs that run under Python 3.

== How can you ask for help? ==

If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list.

== Reference Material ==

Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html
Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open
Storyboard: https://storyboard.openstack.org/#!/board/104
Zuul migration notes: https://etherpad.openstack.org/p/python3-first
Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586
Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3

From doug at doughellmann.com Wed Sep 26 15:58:35 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 26 Sep 2018 11:58:35 -0400
Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series
Message-ID:

It's time to start thinking about community-wide goals for the T series.
We use community-wide goals to achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high - across all OpenStack projects.

Community input is important to ensure that the TC makes good decisions about the goals. We need to consider the timing, cycle length, priority, and feasibility of the suggested goals. If you are interested in proposing a goal, please make sure that before the summit it is described in the tracking etherpad [1] and that you have started a mailing list thread on the openstack-dev list about the proposal so that everyone in the forum session [2] has an opportunity to consider the details. The forum session is only one step in the selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html

From jpc at hpe.com Wed Sep 26 16:02:07 2018
From: jpc at hpe.com (Postlbauer, Juan)
Date: Wed, 26 Sep 2018 16:02:07 +0000
Subject: [openstack-dev] [Heat] Bug in documentation?
Message-ID:

Hi everyone:

I see that the heat doc https://docs.openstack.org/heat/rocky/template_guide/openstack.html#OS::Nova::Flavor states that:

ram: Memory in MB for the flavor.
disk: Size of local disk in GB.

That would be 1000*1000 for ram and 1000*1000*1000 for disk.

But the Nova doc https://developer.openstack.org/api-ref/compute/#create-flavor states that:

ram (body, integer): The amount of RAM a flavor has, in MiB.
disk (body, integer): The size of the root disk that will be created in GiB.

That would be 1024*1024 for ram and 1024*1024*1024 for disk. Which, at least for ram, makes much more sense to me.

Is this a typo in the Heat documentation?

Best Regards,
Juan Postlbauer
-------------- next part --------------
An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5756 bytes Desc: not available URL: From openstack at nemebean.com Wed Sep 26 16:04:03 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 26 Sep 2018 11:04:03 -0500 Subject: [openstack-dev] [tripleo] OVB 1.0 and Upcoming Changes Message-ID: (this is a reprint of a blog post I just made. I'm sending it here explicitly too because most (maybe all) of the major users are here. See also http://blog.nemebean.com/content/ovb-10-and-upcoming-changes) The time has come to declare a 1.0 version for OVB. There are a couple of reasons for this: 1. OVB has been stable for quite a while 2. It's time to start dropping support for ancient behaviors/clouds The first is somewhat self-explanatory. Since its inception, I have attempted to maintain backward compatibility to the earliest deployments of OVB. This hasn't always been 100% successful, but when incompatibilities were introduced they were considered bugs that had to be fixed. At this point the OVB interface has been stable for a significant period of time and it's time to lock that in. However, on that note it is also time to start dropping support for some of those earliest environments. The limitations of the original architecture make it more and more difficult to implement new features and there are very few to no users still relying on it. Declaring a 1.0 and creating a stable branch for it should allow us to move forward with new features while still providing a fallback for anyone who might still be using OVB on a Kilo-based cloud (for example). I'm not aware of any such users, but that doesn't mean they don't exist. Specifically, the following changes are expected for OVB 2.0: * Minimum host cloud version of Newton. This allows us to default to using Neutron port-security, which will simplify the configuration matrix immensely. 
* Drop support for parameters in environment files. All OVB configuration environment files should be using parameter_defaults now anyway, and some upcoming features require us to force the switch. This shouldn't be too painful as it mostly requires s/parameters:/parameter_defaults:/ in any existing environments. * Part of the previous point is a change to how ports and networks are created. This means that if users have created custom port or network layouts they will need to update their templates to reflect the new way of passing in network details. I don't know that anyone has done this, so I expect the impact to be small. The primary motivation for these changes is the work to support routed networks in OVB[1]. It requires customization of some networks that were hard-coded in the initial version of OVB, which means that making them configurable without breaking compatibility would be difficult/impossible. Since the necessary changes should only break very old style deployments, I feel it is time to make a clean cut and move on from them. As I noted earlier, I don't believe this will actually affect many OVB users, if any. If these changes do sound like they may break you, please contact me ASAP. It would be a good idea to test your use-case against the routed-networks branch[1] to make sure it still works. If so, great! There's nothing to do. That branch already includes most of the breaking changes. If not, we can investigate how to maintain compatibility, or if that's not possible you may need to continue using the 1.0 branch of OVB which will exist indefinitely for users who still absolutely need the old behaviors and can't move forward for any reason. There is currently no specific timeline for when these changes will merge back to master, but I hope to get it done in the relatively near future. Don't procrastinate. 
:-) Some of these changes have been coming for a while - the lack of port-security in the default templates is starting to cause more grief than maintaining backward compatibility saves. The routed networks architecture is a realization of the original goal for OVB, which is to deploy arbitrarily complex environments for testing deployment tools. If you want some geek porn, check out this network diagram[2] for routed networks. It's pretty cool to be able to deploy such a complex environment with a couple of configuration files and a single command. Once it is possible to customize all the networks it should be possible to deploy just about any environment imaginable (challenge accepted... ;-). This is a significant milestone for OVB and I look forward to seeing it in action. -Ben 1: https://github.com/cybertron/openstack-virtual-baremetal/tree/routed-networks 2: https://plus.google.com/u/0/+BenNemec/posts/5nGJ3Rzt2iL From nickgrwl3 at gmail.com Wed Sep 26 16:11:25 2018 From: nickgrwl3 at gmail.com (Niket Agrawal) Date: Wed, 26 Sep 2018 18:11:25 +0200 Subject: [openstack-dev] Ryu integration with Openstack Message-ID: Hello, I have a question regarding the Ryu integration in Openstack. By default, the openvswitch bridges (br-int, br-tun and br-ex) are registered to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova compute node. However there is a different instance of the same Ryu controller running on the neutron gateway as well and the three openvswitch bridges (br-int, br-tun and br-ex) are registered to this instance of Ryu controller. If I stop neutron-openvswitch agent on the nova compute node, the bridges there are no longer connected to the controller, but the bridges in the neutron gateway continue to remain connected to the controller. Only when I stop the neutron openvswitch agent in the neutron gateway as well, the bridges there get disconnected. 
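On the question above about where this behavior lives: with the native OpenFlow driver, the neutron-openvswitch-agent starts an embedded Ryu-based OpenFlow controller itself (listening, by default, on 127.0.0.1:6633 via the of_listen_address/of_listen_port options) and registers its local bridges against it — which is why stopping the agent on one node only disconnects that node's bridges. If memory serves, the relevant code is under neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ in the neutron tree, though that path is worth double-checking for your release. The per-bridge registration can be inspected with the standard ovs-vsctl commands; a small sketch that shells out to them (and returns an empty result on hosts without Open vSwitch):

```python
import shutil
import subprocess

def bridge_controllers(bridges=("br-int", "br-tun", "br-ex")):
    """Report the OpenFlow controller target(s) each OVS bridge is set to.

    Wraps `ovs-vsctl get-controller <bridge>`, the CLI view of the
    registration described above. Returns {} when ovs-vsctl is not
    installed (e.g. when run off-box), and None for missing bridges.
    """
    if shutil.which("ovs-vsctl") is None:
        return {}
    result = {}
    for br in bridges:
        out = subprocess.run(["ovs-vsctl", "get-controller", br],
                             capture_output=True, text=True)
        result[br] = out.stdout.split() if out.returncode == 0 else None
    return result

# On a compute node this typically shows something like
# {'br-int': ['tcp:127.0.0.1:6633'], ...}; elsewhere it prints {}.
print(bridge_controllers())
```

`ovs-vsctl get-manager` (the ptcp:127.0.0.1:6640 value mentioned above) is the OVSDB management connection, which is separate from the per-bridge OpenFlow controller setting.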
I'm unable to find where in the Openstack code this is implemented, because I intend to make a few tweaks to the current architecture. Also, I'd like to know which app the Ryu SDN controller is running by default at the moment; I feel the information in the code can help me find that too. Regards, Niket -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Sep 26 16:49:37 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 26 Sep 2018 11:49:37 -0500 Subject: [openstack-dev] [storyboard] Prioritization? In-Reply-To: <3138793d-f86e-2ea2-0b0d-959bcd6b88af@openstack.org> References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <3138793d-f86e-2ea2-0b0d-959bcd6b88af@openstack.org> Message-ID: On 9/25/18 3:29 AM, Thierry Carrez wrote: > Doug Hellmann wrote: >> I think we need to reconsider that position if it's going to block >> adoption. I think Ben's case is an excellent second example of where >> having a field to hold some sort of priority value would be useful. > > Absence of priorities was an initial design choice[1] based on the fact > that in an open collaboration every group, team, organization has their > own views on what the priority of a story is, so worklist and tags are > better ways to capture that. Also they don't really work unless you > triage everything. And then nobody really looks at them to prioritize > their work, so they are high cost for little benefit. So was the storyboard implementation based on the rant section then? Because I don't know that I agree with/understand some of the assertions there. First, don't we _need_ to triage everything? At least on some minimal level? Not looking at new bugs at all seems like the way you end up with a security bug open for two years *ahem*. Not that I would know anything about that (it's been fixed now, FTR).
I'm also not sure I agree with the statement that setting a priority for a blueprint is useless. Prioritizing feature work is something everyone needs to do these days since no team has enough people to implement every proposed feature. Maybe the proposal is for everyone to adopt Nova-style runways, but I'm not sure how well that works for smaller projects where many of the developers are only able to devote part of their time to it. Setting a time window for a feature to merge or get kicked to the back of line would be problematic for me. That section also ends with an unanswered question regarding how to do bug triage in this model, which I guess is the thing we're trying to address with this discussion. > > That said, it definitely creates friction, because alternatives are less > convenient / visible, and it's not how other tools work... so the > "right" answer here may not be the "best" answer. > > [1] https://wiki.openstack.org/wiki/StoryBoard/Priority > Also, like it or not there is technical debt we're carrying over here. All of our bug triage up to this point has been based on launchpad priorities, and as I think I noted elsewhere it would be a big step backward to completely throw that out. Whatever model for prioritization and triage that we choose, I feel like there needs to be a reasonable migration path for the thousands of existing triaged lp bugs in OpenStack. From openstack at nemebean.com Wed Sep 26 17:18:19 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 26 Sep 2018 12:18:19 -0500 Subject: [openstack-dev] [storyboard][oslo] Fewer stories than bugs? 
In-Reply-To: References: <61799e53-2fa6-40a7-ebbd-a1f3df624a8f@nemebean.com> Message-ID: <79f76745-d40a-f170-0f1e-72efb5e36688@nemebean.com> Okay, I found a few bugs that are in launchpad but not storyboard: https://bugs.launchpad.net/python-stevedore/+bug/1784823 https://bugs.launchpad.net/pbr/+bug/1777625 https://bugs.launchpad.net/taskflow/+bug/1756520 https://bugs.launchpad.net/pbr/+bug/1742809 The latter three are all in an incomplete state, so maybe that's being ignored by the migration script? The first appears to be a completely missing project. None of the stevedore bugs I've spot checked are in storyboard. Maybe it has to do with the fact that the project name is stevedore but the bug link is python-stevedore? I'm not sure why that is, but there may be something a little weird going on with that project. On 9/25/18 1:22 PM, Kendall Nelson wrote: > Hey Ben, > > I am looking into it! I am guessing that some of the discrepancy is bugs > being filed after I did the migration. I might also have missed one of > the launchpad projects. I will redo the migrations today and we can see > if the numbers match up after (or are at least much closer). > > We've never had an issue with stories not being created and there were > no errors in any of the runs I did of the migration scripts. I'm > guessing PEBKAC :) > > -Kendall (diablo_rojo) > > On Mon, Sep 24, 2018 at 2:38 PM Ben Nemec > wrote: > > This is a more oslo-specific (maybe) question that came out of the test > migration. I noticed that launchpad is reporting 326 open bugs across > the Oslo projects, but in Storyboard there are only 266 stories > created. > While I'm totally onboard with reducing our bug backlog, I'm curious > why > that is the case. I'm speculating that maybe Launchpad counts bugs that > affect multiple Oslo projects as multiple bugs whereas Storyboard is > counting them as a single story? 
> > I think we were also going to skip > https://bugs.launchpad.net/openstack-infra which for some reason > appeared in the oslo group, but that's only two bugs so it doesn't > account for anywhere near the full difference. > > Mostly I just want to make sure we didn't miss something. I'm hoping > this is a known behavior and we don't have to start comparing bug lists > to find the difference. :-) > > Thanks. > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jpenick at gmail.com Wed Sep 26 17:35:05 2018 From: jpenick at gmail.com (James Penick) Date: Wed, 26 Sep 2018 10:35:05 -0700 Subject: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions In-Reply-To: <1537953028.1215308.1521068936.71ABA285@webmail.messagingengine.com> References: <5B518A5C-10FD-45D1-B462-E5A02DBBE2FE@gmail.com> <1537953028.1215308.1521068936.71ABA285@webmail.messagingengine.com> Message-ID: Hey Colleen, >This sounds like it is based on the customizations done at Oath, which to my recollection did not use the actual federation implementation in keystone due to its reliance on Athenz (I think?) as an identity manager. Something similar can be accomplished in standard keystone with the mapping API in keystone which can cause dynamic generation of a shadow user, project and role assignments. 
You're correct, this was more about the general design of asymmetrical token based authentication rather than our exact implementation with Athenz. We didn't use the shadow users because Athenz authentication in our implementation is done via an 'ntoken' which is Athenz' older method for identification, so it was more straightforward for us to resurrect the PKI driver. The new way is via mTLS, where the user can identify themselves via a client cert. I imagine we'll need to move our implementation to use shadow users as a part of that change. >We have historically pushed back hard against allowing setting a project ID via the API, though I can see predictable-but-not-settable as less problematic. Yup, predictable-but-not-settable is what we need. Basically as long as the uuid is a hash of the string, we're good. I definitely don't want to be able to set a user ID or project ID via API, because of the security and operability problems that could arise. In my mind this would just be a config setting. >One of the use cases from the past was being able to use the same token in different regions, which is problematic from a security perspective. Is that the idea here? Or could someone provide more details on why this is needed? Well, sorta. As far as we're concerned you can authenticate to keystone in each region independently using your credential from the IdP. Our use cases are more about simplifying federation of other systems, like Glance. Say I create an image and a member list for that image. I'd like to be able to copy that image *and* all of its metadata straight across to another cluster and have things Just Work without needing to look up and resolve the new UUIDs on the new cluster. However, deployers who wish to use Keystone as their IdP will need to use that keystone credential to establish a credential in the keystone cluster in that region.
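[Editor's note: a minimal sketch of the predictable-but-not-settable idea using name-based (version 5) UUIDs. This is purely illustrative — not keystone's actual scheme — and the namespace value and key format are assumptions a deployment would pin for itself:]

```python
import uuid

# Illustrative fixed namespace; a real deployment would choose its own
# constant so IDs are stable across all of its regions.
NAMESPACE = uuid.NAMESPACE_DNS

def predictable_id(domain_id, name):
    # uuid5 deterministically hashes (namespace, name), so every region
    # that auto-provisions the same project/user name derives the same
    # ID without any cross-region coordination.
    return uuid.uuid5(NAMESPACE, "%s/%s" % (domain_id, name)).hex

# Two regions computing independently agree on the ID:
assert predictable_id("default", "edge-project") == predictable_id("default", "edge-project")
print(predictable_id("default", "edge-project"))
```

The point is that the ID is derived, never set via the API: nothing user-controllable goes in except the names keystone already validates, which is what makes metadata (e.g. Glance member lists) portable between clusters.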
-James On Wed, Sep 26, 2018 at 2:10 AM Colleen Murphy wrote: > Thanks for the summary, Ildiko. I have some questions inline. > > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote: > > > > > > > We agreed to prefer federation for Keystone and came up with two work > > items to cover missing functionality: > > > > * Keystone to trust a token from an ID Provider master and when the auth > > method is called, perform an idempotent creation of the user, project > > and role assignments according to the assertions made in the token > > This sounds like it is based on the customizations done at Oath, which to > my recollection did not use the actual federation implementation in > keystone due to its reliance on Athenz (I think?) as an identity manager. > Something similar can be accomplished in standard keystone with the mapping > API in keystone which can cause dynamic generation of a shadow user, > project and role assignments. > > > * Keystone should support the creation of users and projects with > > predictable UUIDs (eg.: hash of the name of the users and projects). > > This greatly simplifies Image federation and telemetry gathering > > I was in and out of the room and don't recall this discussion exactly. We > have historically pushed back hard against allowing setting a project ID > via the API, though I can see predictable-but-not-settable as less > problematic. One of the use cases from the past was being able to use the > same token in different regions, which is problematic from a security > perspective. Is that that idea here? Or could someone provide more details > on why this is needed? > > Were there any volunteers to help write up specs and work on the > implementations in keystone? 
> > > Colleen (cmurphy) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Sep 26 17:51:47 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 26 Sep 2018 12:51:47 -0500 Subject: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions In-Reply-To: References: <5B518A5C-10FD-45D1-B462-E5A02DBBE2FE@gmail.com> <1537953028.1215308.1521068936.71ABA285@webmail.messagingengine.com> <7a50c32b-c88d-b362-95bf-bae2e534b91d@gmail.com> Message-ID: For those who may be following along and are not familiar with what we mean by federated auto-provisioning [0]. [0] https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning On Wed, Sep 26, 2018 at 9:06 AM Morgan Fainberg wrote: > This discussion was also not about user assigned IDs, but predictable IDs > with the auto provisioning. We still want it to be something keystone > controls (locally). It might be a hash of domain ID and value from assertion ( > similar to the LDAP user ID generator). As long as within an environment, > the IDs are predictable when auto provisioning via federation, we should be > good. And the problem of the totally unknown ID until provisioning could be > made less of an issue for someone working within a massively federated edge > environment. > > I don't want user/explicit admin set IDs. > > On Wed, Sep 26, 2018, 04:43 Jay Pipes wrote: >> On 09/26/2018 05:10 AM, Colleen Murphy wrote: >> > Thanks for the summary, Ildiko. I have some questions inline.
>> > >> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote: >> > >> > >> > >> >> >> >> We agreed to prefer federation for Keystone and came up with two work >> >> items to cover missing functionality: >> >> >> >> * Keystone to trust a token from an ID Provider master and when the >> auth >> >> method is called, perform an idempotent creation of the user, project >> >> and role assignments according to the assertions made in the token >> > >> > This sounds like it is based on the customizations done at Oath, which >> to my recollection did not use the actual federation implementation in >> keystone due to its reliance on Athenz (I think?) as an identity manager. >> Something similar can be accomplished in standard keystone with the mapping >> API in keystone which can cause dynamic generation of a shadow user, >> project and role assignments. >> > >> >> * Keystone should support the creation of users and projects with >> >> predictable UUIDs (eg.: hash of the name of the users and projects). >> >> This greatly simplifies Image federation and telemetry gathering >> > >> > I was in and out of the room and don't recall this discussion exactly. >> We have historically pushed back hard against allowing setting a project ID >> via the API, though I can see predictable-but-not-settable as less >> problematic. One of the use cases from the past was being able to use the >> same token in different regions, which is problematic from a security >> perspective. Is that that idea here? Or could someone provide more details >> on why this is needed? >> >> Hi Colleen, >> >> I wasn't in the room for this conversation either, but I believe the >> "use case" wanted here is mostly a convenience one. If the edge >> deployment is composed of hundreds of small Keystone installations and >> you have a user (e.g. 
an NFV MANO user) which should have visibility >> across all of those Keystone installations, it becomes a hassle to need >> to remember (or in the case of headless users, store some lookup of) all >> the different tenant and user UUIDs for what is essentially the same >> user across all of those Keystone installations. >> >> I'd argue that as long as it's possible to create a Keystone tenant and >> user with a unique name within a deployment, and as long as it's >> possible to authenticate using the tenant and user *name* (i.e. not the >> UUID), then this isn't too big of a problem. However, I do know that a >> bunch of scripts and external tools rely on setting the tenant and/or >> user via the UUID values and not the names, so that might be where this >> feature request is coming from. >> >> Hope that makes sense? >> >> Best, >> -jay >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Wed Sep 26 18:26:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 26 Sep 2018 14:26:45 -0400 Subject: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables In-Reply-To: <20180926143626.eohywqxdy6tkhfbc@yuggoth.org> References: <20180926142229.GA26870@sm-workstation> <20180926143626.eohywqxdy6tkhfbc@yuggoth.org> Message-ID: Jeremy Stanley writes: > On 2018-09-26 09:22:30 -0500 (-0500), Sean McGinnis wrote: > [...] >> It tests the release automation machinery to identify problems >> before the RC and final release crunch time. > [...] > > More to the point, it helped spot changes to projects which made it > impossible to generate and publish their release artifacts. Coverage > has improved for finding these issues before merging now, as well as > in flight tests on proposed releases, making the risk lower than it > used to be. The new set of packaging jobs that are part of the publish-to-pypi-python3 project template also include a check queue job that runs when any of the packaging files (setup.*, README.rst, etc.) are modified. That should give us an even earlier warning of any packaging failures. Since all python projects will soon use the same release jobs, we will know that the job is working in general based on other releases (including more liberal use of our test repository before big deadlines). Doug From doug at doughellmann.com Wed Sep 26 18:30:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 26 Sep 2018 14:30:26 -0400 Subject: [openstack-dev] [storyboard][oslo] Fewer stories than bugs? 
In-Reply-To: <79f76745-d40a-f170-0f1e-72efb5e36688@nemebean.com> References: <61799e53-2fa6-40a7-ebbd-a1f3df624a8f@nemebean.com> <79f76745-d40a-f170-0f1e-72efb5e36688@nemebean.com> Message-ID: Ben Nemec writes: > Okay, I found a few bugs that are in launchpad but not storyboard: > > https://bugs.launchpad.net/python-stevedore/+bug/1784823 > https://bugs.launchpad.net/pbr/+bug/1777625 > https://bugs.launchpad.net/taskflow/+bug/1756520 > https://bugs.launchpad.net/pbr/+bug/1742809 > > The latter three are all in an incomplete state, so maybe that's being > ignored by the migration script? The first appears to be a completely > missing project. None of the stevedore bugs I've spot checked are in > storyboard. Maybe it has to do with the fact that the project name is > stevedore but the bug link is python-stevedore? I'm not sure why that > is, but there may be something a little weird going on with that > project. The name "stevedore" was taken on LP when I registered that project, so I had to use an alternative name. Doug From Tim.Bell at cern.ch Wed Sep 26 18:55:49 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 26 Sep 2018 18:55:49 +0000 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: Message-ID: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Doug, Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. To give it some context and the motivation: At CERN, we have more than 3000 users of the OpenStack cloud. We write extensive end-user-facing documentation which explains how to use OpenStack along with CERN-specific features (such as workflows for requesting projects/quotas/etc.). One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila).
In other cases, there are subsets of the functionality which require the native project client. I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be defined)
- Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. Tim -----Original Message----- From: Doug Hellmann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 26 September 2018 at 18:00 To: openstack-dev , openstack-operators , openstack-sigs Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series It's time to start thinking about community-wide goals for the T series. We use community-wide goals to achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high - across all OpenStack projects. Community input is important to ensure that the TC makes good decisions about the goals. We need to consider the timing, cycle length, priority, and feasibility of the suggested goals.
If you are interested in proposing a goal, please make sure that before the summit it is described in the tracking etherpad [1] and that you have started a mailing list thread on the openstack-dev list about the proposal so that everyone in the forum session [2] has an opportunity to consider the details. The forum session is only one step in the selection process. See [3] for more details. Doug [1] https://etherpad.openstack.org/p/community-goals [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 [3] https://governance.openstack.org/tc/goals/index.html __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Kevin.Fox at pnnl.gov Wed Sep 26 19:16:47 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 26 Sep 2018 19:16:47 +0000 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> References: , <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: <1A3C52DFCD06494D8528644858247BF01C1AF6BB@EX10MBOX03.pnnl.gov> +1 :) ________________________________________ From: Tim Bell [Tim.Bell at cern.ch] Sent: Wednesday, September 26, 2018 11:55 AM To: OpenStack Development Mailing List (not for usage questions); openstack-operators; openstack-sigs Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series Doug, Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. To give it some context and the motivation: At CERN, we have more than 3000 users of the OpenStack cloud. 
We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. I would strongly support a goal which targets - All new projects should have the end user facing functionality fully exposed via the unified client - Existing projects should aim to close the gap within 'N' cycles (N to be defined) - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. Tim -----Original Message----- From: Doug Hellmann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 26 September 2018 at 18:00 To: openstack-dev , openstack-operators , openstack-sigs Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series It's time to start thinking about community-wide goals for the T series. We use community-wide goals to achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high - across all OpenStack projects. Community input is important to ensure that the TC makes good decisions about the goals. 
We need to consider the timing, cycle length, priority, and feasibility of the suggested goals. If you are interested in proposing a goal, please make sure that before the summit it is described in the tracking etherpad [1] and that you have started a mailing list thread on the openstack-dev list about the proposal so that everyone in the forum session [2] has an opportunity to consider the details. The forum session is only one step in the selection process. See [3] for more details. Doug [1] https://etherpad.openstack.org/p/community-goals [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 [3] https://governance.openstack.org/tc/goals/index.html __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ openstack-sigs mailing list openstack-sigs at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From Arkady.Kanevsky at dell.com Wed Sep 26 19:22:21 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Wed, 26 Sep 2018 19:22:21 +0000 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: <0ec720bd34de4ff09ffad24b7887edfc@AUSX13MPS308.AMER.DELL.COM> +1 -----Original Message----- From: Tim Bell [mailto:Tim.Bell at cern.ch] Sent: Wednesday, September 26, 2018 1:56 PM To: OpenStack Development Mailing List (not for usage questions); openstack-operators; openstack-sigs Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series [EXTERNAL EMAIL] Please report any suspicious attachments, links, or requests for sensitive 
information. Doug, Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. To give it some context and the motivation: At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. I would strongly support a goal which targets - All new projects should have the end user facing functionality fully exposed via the unified client - Existing projects should aim to close the gap within 'N' cycles (N to be defined) - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. Tim -----Original Message----- From: Doug Hellmann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 26 September 2018 at 18:00 To: openstack-dev , openstack-operators , openstack-sigs Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series It's time to start thinking about community-wide goals for the T series. 
We use community-wide goals to achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high - across all OpenStack projects. Community input is important to ensure that the TC makes good decisions about the goals. We need to consider the timing, cycle length, priority, and feasibility of the suggested goals. If you are interested in proposing a goal, please make sure that before the summit it is described in the tracking etherpad [1] and that you have started a mailing list thread on the openstack-dev list about the proposal so that everyone in the forum session [2] has an opportunity to consider the details. The forum session is only one step in the selection process. See [3] for more details. Doug [1] https://etherpad.openstack.org/p/community-goals [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 [3] https://governance.openstack.org/tc/goals/index.html __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ openstack-sigs mailing list openstack-sigs at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From kennelson11 at gmail.com Wed Sep 26 19:34:17 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 26 Sep 2018 12:34:17 -0700 Subject: [openstack-dev] [storyboard][oslo] Fewer stories than bugs? In-Reply-To: References: <61799e53-2fa6-40a7-ebbd-a1f3df624a8f@nemebean.com> <79f76745-d40a-f170-0f1e-72efb5e36688@nemebean.com> Message-ID: So I 100% messed up that migration. Looking back at my terminal history I migrated 'stevedore' instead of 'python-stevedore'. 
I migrated the correct project now and all should be well. Some napkin math says that the number of oslo bugs in lp minus oslo-incubator (it was concluded we didn't need to migrate that one) matches the number of stories in the oslo project group in StoryBoard. Sorry for the confusion! -Kendall (diablo_rojo) On Wed, Sep 26, 2018 at 11:30 AM Doug Hellmann wrote: > Ben Nemec writes: > > > Okay, I found a few bugs that are in launchpad but not storyboard: > > > > https://bugs.launchpad.net/python-stevedore/+bug/1784823 > > https://bugs.launchpad.net/pbr/+bug/1777625 > > https://bugs.launchpad.net/taskflow/+bug/1756520 > > https://bugs.launchpad.net/pbr/+bug/1742809 > > > > The latter three are all in an incomplete state, so maybe that's being > > ignored by the migration script? The first appears to be a completely > > missing project. None of the stevedore bugs I've spot checked are in > > storyboard. Maybe it has to do with the fact that the project name is > > stevedore but the bug link is python-stevedore? I'm not sure why that > > is, but there may be something a little weird going on with that > > project. > > The name "stevedore" was taking on LP when I registered that project, so > I had to use an alternative name. > > Doug > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Sep 26 19:38:40 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 26 Sep 2018 12:38:40 -0700 Subject: [openstack-dev] [storyboard] Prioritization? In-Reply-To: References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <3138793d-f86e-2ea2-0b0d-959bcd6b88af@openstack.org> Message-ID: On Wed, Sep 26, 2018 at 9:50 AM Ben Nemec wrote: > > > On 9/25/18 3:29 AM, Thierry Carrez wrote: > > Doug Hellmann wrote: > >> I think we need to reconsider that position if it's going to block > >> adoption.
I think Ben's case is an excellent second example of where > >> having a field to hold some sort of priority value would be useful. > > > > Absence of priorities was an initial design choice[1] based on the fact > > that in an open collaboration every group, team, organization has their > > own views on what the priority of a story is, so worklist and tags are > > better ways to capture that. Also they don't really work unless you > > triage everything. And then nobody really looks at them to prioritize > > their work, so they are high cost for little benefit. > > So was the storyboard implementation based on the rant section then? > Because I don't know that I agree with/understand some of the assertions > there. > > First, don't we _need_ to triage everything? At least on some minimal > level? Not looking at new bugs at all seems like the way you end up with > a security bug open for two years *ahem*. Not that I would know anything > about that (it's been fixed now, FTR). > > I'm also not sure I agree with the statement that setting a priority for > a blueprint is useless. Prioritizing feature work is something everyone > needs to do these days since no team has enough people to implement > every proposed feature. Maybe the proposal is for everyone to adopt > Nova-style runways, but I'm not sure how well that works for smaller > projects where many of the developers are only able to devote part of > their time to it. Setting a time window for a feature to merge or get > kicked to the back of line would be problematic for me. > > That section also ends with an unanswered question regarding how to do > bug triage in this model, which I guess is the thing we're trying to > address with this discussion. > > > > > That said, it definitely creates friction, because alternatives are less > > convenient / visible, and it's not how other tools work... so the > > "right" answer here may not be the "best" answer. 
> > > > [1] https://wiki.openstack.org/wiki/StoryBoard/Priority > > > > Also, like it or not there is technical debt we're carrying over here. > All of our bug triage up to this point has been based on launchpad > priorities, and as I think I noted elsewhere it would be a big step > backward to completely throw that out. Whatever model for prioritization > and triage that we choose, I feel like there needs to be a reasonable > migration path for the thousands of existing triaged lp bugs in OpenStack. > The information is being migrated[1], we just don't expose it in the webclient. You could still access the info via the API. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -Kendall (diablo_rojo) [1] https://github.com/openstack-infra/storyboard/blob/master/storyboard/migrate/launchpad/writer.py#L183 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgagne at calavera.ca Wed Sep 26 19:40:45 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Wed, 26 Sep 2018 15:40:45 -0400 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: +1 Yes please! -- Mathieu On Wed, Sep 26, 2018 at 2:56 PM Tim Bell wrote: > > > Doug, > > Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. > > To give it some context and the motivation: > > At CERN, we have more than 3000 users of the OpenStack cloud. 
We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). > > One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. > > I would strongly support a goal which targets > > - All new projects should have the end user facing functionality fully exposed via the unified client > - Existing projects should aim to close the gap within 'N' cycles (N to be defined) > - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) > - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) > > The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. > > It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. > > Tim > > -----Original Message----- > From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 26 September 2018 at 18:00 > To: openstack-dev , openstack-operators , openstack-sigs > Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series > > It's time to start thinking about community-wide goals for the T series. > > We use community-wide goals to achieve visible common changes, push for > basic levels of consistency and user experience, and efficiently improve > certain areas where technical debt payments have become too high - > across all OpenStack projects. 
Community input is important to ensure > that the TC makes good decisions about the goals. We need to consider > the timing, cycle length, priority, and feasibility of the suggested > goals. > > If you are interested in proposing a goal, please make sure that before > the summit it is described in the tracking etherpad [1] and that you > have started a mailing list thread on the openstack-dev list about the > proposal so that everyone in the forum session [2] has an opportunity to > consider the details. The forum session is only one step in the > selection process. See [3] for more details. > > Doug > > [1] https://etherpad.openstack.org/p/community-goals > [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 > [3] https://governance.openstack.org/tc/goals/index.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From mordred at inaugust.com Wed Sep 26 19:47:53 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 26 Sep 2018 14:47:53 -0500 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: On 09/26/2018 01:55 PM, Tim Bell wrote: > > Doug, > > Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. > > To give it some context and the motivation: > > At CERN, we have more than 3000 users of the OpenStack cloud. 
We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). > > One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. > > I would strongly support a goal which targets > > - All new projects should have the end user facing functionality fully exposed via the unified client > - Existing projects should aim to close the gap within 'N' cycles (N to be defined) > - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) > - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) > > The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. > > It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. ++ It's also worth noting that we're REALLY close to a 1.0 of openstacksdk (all the patches are in flight, we just need to land them) - and once we've got that we'll be in a position to start shifting python-openstackclient to using openstacksdk instead of python-*client. This will have the additional benefit that, once we've migrated CLIs to python-openstackclient as per this goal, and once we've migrated openstackclient itself to openstacksdk, the number of different libraries one needs to install to interact with openstack will be _dramatically_ lower. 
> -----Original Message----- > From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 26 September 2018 at 18:00 > To: openstack-dev , openstack-operators , openstack-sigs > Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series > > It's time to start thinking about community-wide goals for the T series. > > We use community-wide goals to achieve visible common changes, push for > basic levels of consistency and user experience, and efficiently improve > certain areas where technical debt payments have become too high - > across all OpenStack projects. Community input is important to ensure > that the TC makes good decisions about the goals. We need to consider > the timing, cycle length, priority, and feasibility of the suggested > goals. > > If you are interested in proposing a goal, please make sure that before > the summit it is described in the tracking etherpad [1] and that you > have started a mailing list thread on the openstack-dev list about the > proposal so that everyone in the forum session [2] has an opportunity to > consider the details. The forum session is only one step in the > selection process. See [3] for more details. 
> > Doug > > [1] https://etherpad.openstack.org/p/community-goals > [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 > [3] https://governance.openstack.org/tc/goals/index.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zbitter at redhat.com Wed Sep 26 19:49:44 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 26 Sep 2018 15:49:44 -0400 Subject: [openstack-dev] [Heat] Bug in documentation? In-Reply-To: References: Message-ID: <9a43fd9f-17de-061a-e689-96f27e2861db@redhat.com> On 26/09/18 12:02 PM, Postlbauer, Juan wrote: > Hi everyone: > > I see that heat doc > https://docs.openstack.org/heat/rocky/template_guide/openstack.html#OS::Nova::Flavor > states that > > *ram¶ > * > > Memory in MB for the flavor. > > *disk¶ > * > > Size of local disk in GB. > > That would be 1000*1000 for ram and 1000*1000*1000 for disk. > > But Nova doc > https://developer.openstack.org/api-ref/compute/#create-flavor states that: > > ram > > > > body > > > > integer > > > > The amount of RAM a flavor has, in MiB. > > disk > > > > body > > > > integer > > > > The size of the root disk that will be created in GiB. > > That would be 1024*1024 for ram and 1024*1024*1024 for disk. Which, at > least for ram, makes much more sense to me. > > Is this a typo in Heat documentation? No, but it's ambiguous in a way that MiB/GiB would not be. Feel free to submit a patch. 
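Zane's point about the ambiguity is easy to quantify. A quick illustrative calculation (not part of the original thread) shows how far apart the two readings of a flavor's ram field are:

```python
# Decimal (SI) vs. binary (IEC) readings of a flavor's "ram" field.
# The Nova API reference documents the value as MiB; "MB" in prose is
# ambiguous between the two.
MB = 1000 ** 2    # megabyte (SI)
MiB = 1024 ** 2   # mebibyte (IEC)

ram = 4096  # a typical flavor's ram value

as_mib = ram * MiB  # 4294967296 bytes
as_mb = ram * MB    # 4096000000 bytes
print(as_mib - as_mb)  # 198967296 -- nearly 200 MB of ambiguity
```

The relative gap grows with the power: about 4.9% for MiB vs MB (ram), about 7.4% for GiB vs GB (disk), which is why spelling out the binary units in the docs matters.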
- ZB From doug at doughellmann.com Wed Sep 26 20:01:37 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 26 Sep 2018 16:01:37 -0400 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: Monty Taylor writes: > On 09/26/2018 01:55 PM, Tim Bell wrote: >> >> Doug, >> >> Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. >> >> To give it some context and the motivation: >> >> At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). >> >> One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. >> >> I would strongly support a goal which targets >> >> - All new projects should have the end user facing functionality fully exposed via the unified client >> - Existing projects should aim to close the gap within 'N' cycles (N to be defined) >> - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) >> - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) >> >> The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. >> >> It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. 
> > ++ > > It's also worth noting that we're REALLY close to a 1.0 of openstacksdk > (all the patches are in flight, we just need to land them) - and once > we've got that we'll be in a position to start shifting > python-openstackclient to using openstacksdk instead of python-*client. > > This will have the additional benefit that, once we've migrated CLIs to > python-openstackclient as per this goal, and once we've migrated > openstackclient itself to openstacksdk, the number of different > libraries one needs to install to interact with openstack will be > _dramatically_ lower. Would it be useful to have the SDK work in OSC as a prerequisite to the goal work? I would hate to have folks have to write a bunch of things twice. Do we have any sort of list of which projects aren't currently being handled by OSC? If we could get some help building such a list, that would help us understand the scope of the work. As far as admin features, I think we've been hesitant to add those to OSC in the past, but I can see the value. I wonder if having them in a separate library makes sense? Or is it better to have commands in the tool that regular users can't access, and just report the permission error when they try to run the command? Doug > >> -----Original Message----- >> From: Doug Hellmann >> Reply-To: "OpenStack Development Mailing List (not for usage questions)" >> Date: Wednesday, 26 September 2018 at 18:00 >> To: openstack-dev , openstack-operators , openstack-sigs >> Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series >> >> It's time to start thinking about community-wide goals for the T series. >> >> We use community-wide goals to achieve visible common changes, push for >> basic levels of consistency and user experience, and efficiently improve >> certain areas where technical debt payments have become too high - >> across all OpenStack projects. 
Community input is important to ensure >> that the TC makes good decisions about the goals. We need to consider >> the timing, cycle length, priority, and feasibility of the suggested >> goals. >> >> If you are interested in proposing a goal, please make sure that before >> the summit it is described in the tracking etherpad [1] and that you >> have started a mailing list thread on the openstack-dev list about the >> proposal so that everyone in the forum session [2] has an opportunity to >> consider the details. The forum session is only one step in the >> selection process. See [3] for more details. >> >> Doug >> >> [1] https://etherpad.openstack.org/p/community-goals >> [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 >> [3] https://governance.openstack.org/tc/goals/index.html >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tpb at dyncloud.net Wed Sep 26 20:27:52 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 26 Sep 2018 16:27:52 -0400 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> 
References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: <20180926202752.j56dtfyahnw4triq@barron.net> On 26/09/18 18:55 +0000, Tim Bell wrote: > >Doug, > >Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. > >To give it some context and the motivation: > >At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). > >One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). Tim, First, I endorse this goal. That said, lack of coverage of Manila in the OpenStack client was articulated as a need (by CERN and others) during the Vancouver Forum. At the recent Manila PTG we set addressing this technical debt as a Stein cycle goal, as well as OpenStack SDK integration for Manila. -- Tom Barron (tbarron) > In other cases, there are subsets of the function which require the native project client. > >I would strongly support a goal which targets > >- All new projects should have the end user facing functionality fully exposed via the unified client >- Existing projects should aim to close the gap within 'N' cycles (N to be defined) >- Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) >- Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) > >The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. 
> >It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. > >Tim > >-----Original Message----- >From: Doug Hellmann >Reply-To: "OpenStack Development Mailing List (not for usage questions)" >Date: Wednesday, 26 September 2018 at 18:00 >To: openstack-dev , openstack-operators , openstack-sigs >Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series > > It's time to start thinking about community-wide goals for the T series. > > We use community-wide goals to achieve visible common changes, push for > basic levels of consistency and user experience, and efficiently improve > certain areas where technical debt payments have become too high - > across all OpenStack projects. Community input is important to ensure > that the TC makes good decisions about the goals. We need to consider > the timing, cycle length, priority, and feasibility of the suggested > goals. > > If you are interested in proposing a goal, please make sure that before > the summit it is described in the tracking etherpad [1] and that you > have started a mailing list thread on the openstack-dev list about the > proposal so that everyone in the forum session [2] has an opportunity to > consider the details. The forum session is only one step in the > selection process. See [3] for more details. 
> > Doug > > [1] https://etherpad.openstack.org/p/community-goals > [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 > [3] https://governance.openstack.org/tc/goals/index.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >_______________________________________________ >openstack-sigs mailing list >openstack-sigs at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From skaplons at redhat.com Wed Sep 26 20:30:48 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 26 Sep 2018 22:30:48 +0200 Subject: [openstack-dev] Ryu integration with Openstack In-Reply-To: References: Message-ID: <418337EF-FF89-4B26-8EF7-B8C5CF752E24@redhat.com> Hi, > On 26.09.2018, at 18:11, Niket Agrawal wrote: > > Hello, > > I have a question regarding the Ryu integration in Openstack. By default, the openvswitch bridges (br-int, br-tun and br-ex) are registered to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova compute node. However there is a different instance of the same Ryu controller running on the neutron gateway as well and the three openvswitch bridges (br-int, br-tun and br-ex) are registered to this instance of Ryu controller. If I stop neutron-openvswitch agent on the nova compute node, the bridges there are no longer connected to the controller, but the bridges in the neutron gateway continue to remain connected to the controller. Only when I stop the neutron openvswitch agent in the neutron gateway as well, the bridges there get disconnected.
> > I'm unable to find where in the Openstack code I can access this implementation, because I intend to make a few tweaks to this architecture which is present currently. Also, I'd like to know which app is the Ryu SDN controller running by default at the moment. I feel the information in the code can help me find it too. Ryu app is started by neutron-openvswitch-agent in: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34 Is it what You are looking for? > > Regards, > Niket > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From mriedemos at gmail.com Wed Sep 26 20:44:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 26 Sep 2018 15:44:53 -0500 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> On 9/26/2018 3:01 PM, Doug Hellmann wrote: > Monty Taylor writes: > >> On 09/26/2018 01:55 PM, Tim Bell wrote: >>> Doug, >>> >>> Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. I would personally like to thank the person that put that goal in the etherpad...they must have had amazing foresight and unparalleled modesty. >>> >>> To give it some context and the motivation: >>> >>> At CERN, we have more than 3000 users of the OpenStack cloud. 
We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). >>> >>> One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. >>> >>> I would strongly support a goal which targets >>> >>> - All new projects should have the end user facing functionality fully exposed via the unified client >>> - Existing projects should aim to close the gap within 'N' cycles (N to be defined) >>> - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) >>> - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) >>> >>> The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. >>> >>> It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. >> ++ >> >> It's also worth noting that we're REALLY close to a 1.0 of openstacksdk >> (all the patches are in flight, we just need to land them) - and once >> we've got that we'll be in a position to start shifting >> python-openstackclient to using openstacksdk instead of python-*client. >> >> This will have the additional benefit that, once we've migrated CLIs to >> python-openstackclient as per this goal, and once we've migrated >> openstackclient itself to openstacksdk, the number of different >> libraries one needs to install to interact with openstack will be >> _dramatically_ lower. > Would it be useful to have the SDK work in OSC as a prerequisite to the > goal work? 
I would hate to have folks have to write a bunch of things > twice. > > Do we have any sort of list of which projects aren't currently being > handled by OSC? If we could get some help building such a list, that > would help us understand the scope of the work. I started documenting the compute API gaps in OSC last release [1]. It's a big gap and needs a lot of work, even for existing CLIs (the cold/live migration CLIs in OSC are a mess, and you can't even boot from volume where nova creates the volume for you). That's also why I put something into the etherpad about the OSC core team even being able to handle an onslaught of changes for a goal like this. > > As far as admin features, I think we've been hesitant to add those to > OSC in the past, but I can see the value. I wonder if having them in a > separate library makes sense? Or is it better to have commands in the > tool that regular users can't access, and just report the permission > error when they try to run the command? I thought the same, and we talked about this at the Austin summit, but OSC is inconsistent about this (you can live migrate a server but you can't evacuate it - there is no CLI for evacuation). It also came up at the Stein PTG with Dean in the nova room giving us some direction. [2] I believe the summary of that discussion was: a) to deal with the core team sprawl, we could move the compute stuff out of python-openstackclient and into an osc-compute plugin (like the osc-placement plugin for the placement service); then we could create a new core team which would have python-openstackclient-core as a superset b) Dean suggested that we close the compute API gaps in the SDK first, but that could take a long time as well...but it sounded like we could use the SDK for things that existed in the SDK and use novaclient for things that didn't yet exist in the SDK This might be a candidate for one of these multi-release goals that the TC started talking about at the Stein PTG. 
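Option (a) above has a concrete precedent: osc-placement hooks into OSC via setuptools entry points. For anyone unfamiliar with the mechanism, a hypothetical osc-compute plugin's setup.cfg might look roughly like this (the module and command names are illustrative, not an existing project; the entry point group naming follows the pattern described in the OSC plugin docs):

```ini
[entry_points]
# Tell openstackclient to load this plugin's API extension module.
openstack.cli.extension =
    osc_compute = osc_compute.plugin

# Map command names to cliff command classes for the compute v2 API.
openstack.osc_compute.v2 =
    server_evacuate = osc_compute.v2.server:EvacuateServer
    server_migrate = osc_compute.v2.server:MigrateServer
```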
I could see something like this being a goal for Stein: "Each project owns its own osc- plugin for OSC CLIs" That deals with the core team and sprawl issue, especially with stevemar being gone and dtroyer being distracted by shiny x-men bird related things. That also seems relatively manageable for all projects to do in a single release. Having a single-release goal of "close all gaps across all service types" is going to be extremely tough for any older projects that had CLIs before OSC was created (nova/cinder/glance/keystone). For newer projects, like placement, it's not a problem because they never created any other CLI outside of OSC. [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc [2] https://etherpad.openstack.org/p/nova-ptg-stein (~L721) -- Thanks, Matt From dtroyer at gmail.com Wed Sep 26 21:12:15 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 26 Sep 2018 16:12:15 -0500 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: On Wed, Sep 26, 2018 at 3:01 PM, Doug Hellmann wrote: > Would it be useful to have the SDK work in OSC as a prerequisite to the > goal work? I would hate to have folks have to write a bunch of things > twice. I don't think this is necessary, once we have the auth and service discovery/version negotiation plumbing in OSC properly new things can be done in OSC without having to wait for conversion. Any of the existing client libs that can utilize an adapter form the SDK makes this even simpler for conversion. > Do we have any sort of list of which projects aren't currently being > handled by OSC? If we could get some help building such a list, that > would help us understand the scope of the work. We have asked plugins to maintain their presence in the OSC docs [0], there are three listed there as not having plugins but I wouldn't consider that exhaustive. 
We also ask them to list their resource names in [1] to reserve the name to help prevent name collisions. > As far as admin features, I think we've been hesitant to add those to > OSC in the past, but I can see the value. I wonder if having them in a > separate library makes sense? Or is it better to have commands in the > tool that regular users can't access, and just report the permission > error when they try to run the command? The admin/non-admin distinction has not been a hard rule in most places, we have plenty of admin commands in OSC. At times we have talked about pulling those out of the OSC repo into an admin plugin, I haven't encouraged that as I am not convinced of the value enough to put aside other things to do it. Due to configurable policy it also is not clear what to include and exclude, to me it is a better user experience, and more interoperable between cloud deployments, to correctly handle when admin/policy refuses to do something and let the user sort it out as necessary. [0] https://docs.openstack.org/python-openstackclient/latest/contributor/plugins.html#adoption [1] https://docs.openstack.org/python-openstackclient/latest/cli/commands.html#plugin-objects dt -- Dean Troyer dtroyer at gmail.com From dtroyer at gmail.com Wed Sep 26 21:33:27 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 26 Sep 2018 16:33:27 -0500 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> Message-ID: On Wed, Sep 26, 2018 at 3:44 PM, Matt Riedemann wrote: > I started documenting the compute API gaps in OSC last release [1]. It's a > big gap and needs a lot of work, even for existing CLIs (the cold/live > migration CLIs in OSC are a mess, and you can't even boot from volume where > nova creates the volume for you). 
That's also why I put something into the > etherpad about the OSC core team even being able to handle an onslaught of > changes for a goal like this. The OSC core team is very thin; yes, it seems as though companies don't like to spend money on client-facing things...I'll be in the hall following this thread should anyone want to talk... The migration commands are a mess, mostly because I got them wrong to start with and we have only tried to patch it up; this is one area I think we need to wipe clean and fix properly. Yay! Major version release! > I thought the same, and we talked about this at the Austin summit, but OSC > is inconsistent about this (you can live migrate a server but you can't > evacuate it - there is no CLI for evacuation). It also came up at the Stein > PTG with Dean in the nova room giving us some direction. [2] I believe the > summary of that discussion was: > a) to deal with the core team sprawl, we could move the compute stuff out of > python-openstackclient and into an osc-compute plugin (like the > osc-placement plugin for the placement service); then we could create a new > core team which would have python-openstackclient-core as a superset This is not my first choice but is not terrible either... > b) Dean suggested that we close the compute API gaps in the SDK first, but > that could take a long time as well...but it sounded like we could use the > SDK for things that existed in the SDK and use novaclient for things that > didn't yet exist in the SDK Yup, this can be done in parallel. The unit of decision for use sdk vs use XXXclient lib is per-API call. If the client lib can use an SDK adapter/session it becomes even better. I think the priority for what to address first should be guided by complete gaps in coverage and the need for microversion-driven changes.
I could see something like this > being a goal for Stein: > > "Each project owns its own osc- plugin for OSC CLIs" > > That deals with the core team and sprawl issue, especially with stevemar > being gone and dtroyer being distracted by shiny x-men bird related things. > That also seems relatively manageable for all projects to do in a single > release. Having a single-release goal of "close all gaps across all service > types" is going to be extremely tough for any older projects that had CLIs > before OSC was created (nova/cinder/glance/keystone). For newer projects, > like placement, it's not a problem because they never created any other CLI > outside of OSC. I think the major difficulty here is simply how to migrate users from today's state to the future state in a reasonable manner. If we could teach OSC how to handle the same command being defined in multiple plugins properly (hello entrypoints!), it could be much simpler, as we could start creating the new plugins and switch as the new command implementations become available rather than having a hard cutover. Or maybe the definition of OSC v4 is as above and we just work at it until complete and cut over at the end. Note that the current APIs that are in-repo (Compute, Identity, Image, Network, Object, Volume) are all implemented using the plugin structure; OSC v4 could start as the breaking out of those without command changes (except new migration commands!) and then the plugins all re-write and update at their own tempo. Dang, did I just deconstruct my project? One thing I don't like about that is we just replace N client libs with N (or more) plugins now and the number of things a user must install doesn't go down. I would like to hear from anyone who deals with installing OSC if that is still a big deal, or should I let go of that worry?
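To make the "same command defined in multiple plugins" idea above concrete, here is a toy sketch. This is not OSC's actual loader (OSC discovers commands by scanning entry points via cliff/stevedore); it only illustrates one possible resolution policy, where a command name registered by both a legacy tree and a new plugin is resolved in favor of the higher-priority registration, so the switch can happen per-command instead of as a hard cutover. All names are hypothetical.

```python
# Toy command registry: each registration carries a priority, and the
# highest-priority handler wins for a given command name. A real loader
# would derive priority from entry point group/metadata rather than an
# explicit argument.

registry = {}

def register(command, handler, priority=0):
    """Register a handler; keep only the highest-priority one per name."""
    current = registry.get(command)
    if current is None or priority > current[0]:
        registry[command] = (priority, handler)

def dispatch(command, *args):
    """Invoke the winning handler for a command name."""
    return registry[command][1](*args)

# Legacy in-repo implementation (low priority).
register("server migrate", lambda srv: f"legacy migrate {srv}", priority=0)
# New osc-compute-style plugin shadows it without removing the old code.
register("server migrate", lambda srv: f"plugin migrate {srv}", priority=10)

print(dispatch("server migrate", "vm-1"))  # -> plugin migrate vm-1
```

The point of the sketch is only the precedence rule: both implementations can be installed simultaneously, and users see the new one as soon as it exists.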
dt -- Dean Troyer dtroyer at gmail.com From nickgrwl3 at gmail.com Wed Sep 26 22:08:16 2018 From: nickgrwl3 at gmail.com (Niket Agrawal) Date: Thu, 27 Sep 2018 00:08:16 +0200 Subject: [openstack-dev] Ryu integration with Openstack In-Reply-To: <418337EF-FF89-4B26-8EF7-B8C5CF752E24@redhat.com> References: <418337EF-FF89-4B26-8EF7-B8C5CF752E24@redhat.com> Message-ID: Hi, Thanks for your reply. Is there a way to access the code that is running in the app, to see what logic is implemented in the app? Regards, Niket On Wed, Sep 26, 2018 at 10:31 PM Slawomir Kaplonski wrote: > Hi, > > > Message written by Niket Agrawal on > 26.09.2018 at 18:11: > > > > Hello, > > > > I have a question regarding the Ryu integration in Openstack. By > default, the openvswitch bridges (br-int, br-tun and br-ex) are registered > to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl > get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova compute > node. However there is a different instance of the same Ryu controller > running on the neutron gateway as well and the three openvswitch bridges > (br-int, br-tun and br-ex) are registered to this instance of Ryu > controller. If I stop neutron-openvswitch agent on the nova compute node, > the bridges there are no longer connected to the controller, but the > bridges in the neutron gateway continue to remain connected to the > controller. Only when I stop the neutron openvswitch agent in the neutron > gateway as well, the bridges there get disconnected. > > > > I'm unable to find where in the Openstack code I can access this > implementation, because I intend to make a few tweaks to this architecture > which is present currently. Also, I'd like to know which app is the Ryu SDN > controller running by default at the moment. I feel the information in the > code can help me find it too.
> > Ryu app is started by neutron-openvswitch-agent in: > https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34 > Is it what You are looking for? > > > > > Regards, > > Niket > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Sep 26 22:10:46 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 26 Sep 2018 15:10:46 -0700 Subject: [openstack-dev] [nova] Stein PTG summary Message-ID: <03b8b23f-31b6-e5fa-675d-5a40fbab58b5@gmail.com> Hello everybody, I've written up a high level summary of the discussions we had at the PTG -- please feel free to reply to this thread to fill in anything I've missed. We used our PTG etherpad: https://etherpad.openstack.org/p/nova-ptg-stein as an agenda and each topic we discussed was filled in with agreements, todos, and action items during the discussion. Please check out the etherpad to find notes relevant to your topics of interest, and reach out to us on IRC in #openstack-nova, on this mailing list with the [nova] tag, or by email to me if you have any questions. 
Now, onto the high level summary: Rocky retrospective =================== We began Wednesday morning with a retro on the Rocky cycle and captured notes on this etherpad: https://etherpad.openstack.org/p/nova-rocky-retrospective The runways review process was seen as overall positive and helped get some blueprint implementations merged that had languished in previous cycles. We agreed to continue with the runways process as-is in Stein and use it for approved blueprints. We did note that we could do better at queuing important approved work into runways, such as placement-related efforts that were not added to runways last cycle. We discussed whether or not to move the spec freeze deadline back to milestone 1 (we used milestone 2 in Rocky). I have an action item to dig into whether or not the late breaking regressions we found at RC time: https://etherpad.openstack.org/p/nova-rocky-release-candidate-todo were related to the later spec freeze at milestone 2. The question we want to answer is: did a later spec freeze lead to implementations landing later and resulting in the late detection of regressions at release candidate time? Finally, we discussed a lot of things around project management, end-to-end themes for a cycle, and people generally not feeling they had clarity throughout the cycle about which efforts and blueprints were most important, aside from runways. We got a lot of work done in Rocky, but not as much of it materialized into user-facing features and improvements as it did in Queens. Last cycle, we had thought runways would capture what is a priority at any given time, but looking back, we determined it would be helpful if we still had over-arching goals/efforts/features written down for people to refer to throughout the cycle. 
We dove deeper into that discussion on Friday during the hour before lunch, where we came up with user-facing themes we aim to accomplish in the Stein cycle: https://etherpad.openstack.org/p/nova-ptg-stein-priorities Note that these are _not_ meant to preempt anything in runways; these are just 1) for my use as a project manager and 2) for everyone's use to keep a bigger picture of our goals for the cycle in their heads, to aid in their work and review outside of runways. Themes ====== With that, I'll briefly mention the themes we came up with for the cycle: * Compute nodes able to upgrade to, and operate with, nested resource providers for multiple GPU types * Multi-cell operational enhancements: resilience to "down" or poor-performing cells and cross-cell instance migration * Volume-backed user experience and API hardening: ability to specify volume type during boot-from-volume, detach/attach of root volume, and volume-backed rebuild These are the user-visible features and functionality we aim to deliver and we'll keep tabs on these efforts throughout the cycle to keep them making progress. Placement ========= As usual, we had a lot of discussions on placement-related topics, so I'll try to highlight the main things that stand out to me. Please see the "Placement" section of our PTG etherpad for all the details and additional topics we discussed. We discussed the regression in behavior that happened when we removed the Aggregate[Core|Ram|Disk]Filters from the scheduler filters -- these filters allowed operators to set overcommit allocation ratios per aggregate instead of per host. We agreed on the importance of restoring this functionality and hashed out a concrete plan, with two specs needed to move forward: https://review.openstack.org/552105 https://review.openstack.org/544683 The other standout discussions were around the placement extraction and closing the gaps in nested resource providers.
For the placement extraction, we are focusing on full support of an upgrade from integrated placement => extracted placement, including assisting with making sure deployment tools like OpenStack-Ansible and TripleO are able to support the upgrade. For closing the gaps in nested resource providers, there are many parts to it that are documented on the aforementioned PTG etherpads. By closing the gaps with nested resource providers, we'll open the door for being able to support minimum bandwidth scheduling as well. Cells ===== On cells, the main discussions were around resiliency "down" and poor-performing cells and cross-cell migration. Please see the "Cells" section of our PTG etherpad for all the details and additional topics we discussed. Some multi-cell resiliency work was completed in Rocky and is continuing in-progress for Stein, so there are no surprises there. Based on discussion at the PTG, there's enough info to start work on the cross-cell migration functionality. "Cross-project Day" =================== We had all of our cross-project discussions with the Cinder, Cyborg, Neutron, and Ironic teams on Thursday. Please see the "Thursday" section of our etherpad for details of all topics discussed. With the Cinder team, we went over plans for volume-backed rebuild, improving the boot-from-volume experience by accepting volume type, and detach/attach of root volumes. We agreed to move forward with these features. This was also the start of a discussion around transfer of ownership of resources (volume/instance/port/etc) from one project/user to another. The current idea is to develop a tool that will do the database surgery correctly, instead of trying to implement ownership transfer APIs in each service and orchestrating them. More details on that are to come. 
With the Cyborg team, we focused on solidifying what Nova changes would be needed to integrate with Cyborg, and the Cyborg team is going to propose a Nova spec for those changes: https://etherpad.openstack.org/p/stein-ptg.cyborg-nova-new With the Neutron team, we had a demo of minimum bandwidth scheduling to kick things off. A link to a writeup about the demo is available here if you missed it: http://lists.openstack.org/pipermail/openstack-dev/2018-September/134957.html Afterward, we discussed heterogeneous (linuxbridge, ovs, etc) Neutron ML2 backends and the current inability to migrate an instance between them -- we thought we had gained the ability by way of leveraging the newest Neutron port binding API but it turns out there are still some gaps. We discussed minimum bandwidth scheduling and ownership transfer of a port. We quickly realized transferring a port from a non-shared network would be really complicated, so we suspect the more realistic use case for someone wanting to transfer an instance and its ports to another project/user would involve an instance on a shared network, in which case the transfer is just database surgery. With the Ironic team, we discussed the problem of Nova/Ironic powersync wherein an instance that had been powered off via the Nova API is turned on via IPMI by a maintenance engineer to perform maintenance, then is turned back off by Nova, disrupting the maintenance. We agreed that Ironic will leverage Nova's external events API to notify Nova when a node has been powered on and should be considered ON so that Nova will not try to shut it down. We also discussed the need for failure domains for nova-computes controlling subsets of Ironic nodes and agreed to implement it as a config option in the [ironic] section to specify an Ironic partition key and a list of services with which a node should peer. We also discussed whether to deprecate the ComputeCapabilities filter and we agreed to deprecate it.
But, judging from the ML thread about it: http://lists.openstack.org/pipermail/openstack-dev/2018-September/135059.html I'm not sure it's appropriate to deprecate yet. Tech Debt and Miscellaneous Topic Day ===================================== Friday was our day for discussing topics from the "Tech Debt/Project Management" and "Miscellaneous" sections of our PTG etherpad. Please see the etherpad for all the notes taken on those discussions. The major topics that stand out to me were the proposal to move to Keystone unified limits and filling in gaps in openstackclient (OSC) for support of newer compute API microversions and achieving parity with novaclient. Example: migrations and boot-from-volume work differently between openstackclient and novaclient. The support of OSC is coming up on the ML now as a prospective community-wide goal for the T series: http://lists.openstack.org/pipermail/openstack-dev/2018-September/135107.html On unified limits, we agreed we should migrate to unified limits, noting that I think we must wait for a few more oslo.limit changes to land first. We agreed to drop per user limits on resources when we move to unified limits. This means that we will no longer allow setting a limit on a resource for a particular user -- only for a particular project. Note that with unified limits, we will gain the ability to have strict two-level hierarchy, which should address the reasons why admins leverage per user limits, at present. We will signal the upcoming change with a 'nova-status upgrade check'. And we're freezing all other quota-related features until we integrate with unified limits. I think that's about it for the "summary" which has gotten pretty long here. Find us on IRC in #openstack-nova or email us on this mailing list with the [nova] tag if you have any questions about any discussions from the PTG. 
Cheers, -melanie From doug at doughellmann.com Wed Sep 26 22:29:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 26 Sep 2018 18:29:11 -0400 Subject: [openstack-dev] [goal][python3] week 7 update In-Reply-To: References: Message-ID: Doug Hellmann writes: > == Things We Learned This Week == > > When we updated the tox.ini settings for jobs like pep8 and release > notes early in the Rocky session we only touched some of the official > repositories. I'll be working on making a list of the ones we missed so > we can update them by the end of Stein. I see quite a few repositories with tox settings out of date (about 350, see below). Given the volume, I'm going to prepare the patches and propose them a few at a time over the next couple of weeks. As background, each repo needs a patch (to master only) that looks like [1]. It needs to set the "basepython" parameter in all of the relevant tox environments to "python3" to force using python 3. It is most important to set the docs, linters, pep8, releasenotes, lower-constraints and venv environments, but we also wanted to include bindep and cover if they are present. The patches I prepare will update all of those environments. We should also include any other environments that run jobs, but teams may want to duplicate some (and add the relevant jobs) rather than changing all of the functional test jobs. As with the other functional job changes, I will leave those up to the project teams. As the commit message on [1] explains, we are using "python3" on purpose: * We do not want to specify a minor version number, because we do not want to have to update the file every time we upgrade python. * We do not want to set the override once in testenv, because that breaks the more specific versions used in default environments like py35 and py36 (at least under older versions of tox). 
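For reference, the body of each of those patches is small. A hypothetical excerpt of the tox.ini changes described above (environment names taken from the list in this message; any given repo will only have a subset of them):

```ini
# Force python 3 without pinning a minor version. Note there is no
# override in [testenv] itself, so py35/py36 keep their own more
# specific interpreters under older versions of tox.
[testenv:pep8]
basepython = python3

[testenv:docs]
basepython = python3

[testenv:releasenotes]
basepython = python3

[testenv:lower-constraints]
basepython = python3

[testenv:venv]
basepython = python3
```

The same one-line override goes into bindep, cover, and any other job-backed environments a repo defines.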
In case you want to watch for them, all of the new patches will use "fix tox python3 overrides" as the first line of the commit message (the tracking tool looks for that string). Doug [1] https://review.openstack.org/#/c/573355/ From sylvain.bauza at gmail.com Wed Sep 26 22:30:25 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Thu, 27 Sep 2018 00:30:25 +0200 Subject: [openstack-dev] [nova] Stein PTG summary In-Reply-To: <03b8b23f-31b6-e5fa-675d-5a40fbab58b5@gmail.com> References: <03b8b23f-31b6-e5fa-675d-5a40fbab58b5@gmail.com> Message-ID: Thanks for the recap email, Mel. Just a question inline for all the people that were in the room on Wednesday. On Thu, Sep 27, 2018 at 00:10, melanie witt wrote: > Hello everybody, > > I've written up a high level summary of the discussions we had at the > PTG -- please feel free to reply to this thread to fill in anything I've > missed. > > We used our PTG etherpad: > > https://etherpad.openstack.org/p/nova-ptg-stein > > as an agenda and each topic we discussed was filled in with agreements, > todos, and action items during the discussion. Please check out the > etherpad to find notes relevant to your topics of interest, and reach > out to us on IRC in #openstack-nova, on this mailing list with the > [nova] tag, or by email to me if you have any questions. > > Now, onto the high level summary: > > Rocky retrospective > =================== > We began Wednesday morning with a retro on the Rocky cycle and captured > notes on this etherpad: > > https://etherpad.openstack.org/p/nova-rocky-retrospective > > The runways review process was seen as overall positive and helped get > some blueprint implementations merged that had languished in previous > cycles. We agreed to continue with the runways process as-is in Stein > and use it for approved blueprints. We did note that we could do better > at queuing important approved work into runways, such as > placement-related efforts that were not added to runways last cycle.
> > We discussed whether or not to move the spec freeze deadline back to > milestone 1 (we used milestone 2 in Rocky). I have an action item to dig > into whether or not the late breaking regressions we found at RC time: > > https://etherpad.openstack.org/p/nova-rocky-release-candidate-todo > > were related to the later spec freeze at milestone 2. The question we > want to answer is: did a later spec freeze lead to implementations > landing later and resulting in the late detection of regressions at > release candidate time? > > Finally, we discussed a lot of things around project management, > end-to-end themes for a cycle, and people generally not feeling they had > clarity throughout the cycle about which efforts and blueprints were > most important, aside from runways. We got a lot of work done in Rocky, > but not as much of it materialized into user-facing features and > improvements as it did in Queens. Last cycle, we had thought runways > would capture what is a priority at any given time, but looking back, we > determined it would be helpful if we still had over-arching > goals/efforts/features written down for people to refer to throughout > the cycle. We dove deeper into that discussion on Friday during the hour > before lunch, where we came up with user-facing themes we aim to > accomplish in the Stein cycle: > > https://etherpad.openstack.org/p/nova-ptg-stein-priorities > > Note that these are _not_ meant to preempt anything in runways, these > are just 1) for my use as a project manager and 2) for everyone's use to > keep a bigger picture of our goals for the cycle in their heads, to aid > in their work and review outside of runways. 
> > Themes > ====== > With that, I'll briefly mention the themes we came up with for the cycle: > > * Compute nodes capable to upgrade and exist with nested resource > providers for multiple GPU types > > * Multi-cell operational enhancements: resilience to "down" or > poor-performing cells and cross-cell instance migration > > * Volume-backed user experience and API hardening: ability to specify > volume type during boot-from-volume, detach/attach of root volume, and > volume-backed rebuild > > These are the user-visible features and functionality we aim to deliver > and we'll keep tabs on these efforts throughout the cycle to keep them > making progress. > > Placement > ========= > As usual, we had a lot of discussions on placement-related topics, so > I'll try to highlight the main things that stand out to me. Please see > the "Placement" section of our PTG etherpad for all the details and > additional topics we discussed. > > We discussed the regression in behavior that happened when we removed > the Aggregate[Core|Ram|Disk]Filters from the scheduler filters -- these > filters allowed operators to set overcommit allocation ratios per > aggregate instead of per host. We agreed on the importance of restoring > this functionality and hashed out a concrete plan, with two specs needed > to move forward: > > https://review.openstack.org/552105 > https://review.openstack.org/544683 > > The other standout discussions were around the placement extraction and > closing the gaps in nested resource providers. For the placement > extraction, we are focusing on full support of an upgrade from > integrated placement => extracted placement, including assisting with > making sure deployment tools like OpenStack-Ansible and TripleO are able > to support the upgrade. For closing the gaps in nested resource > providers, there are many parts to it that are documented on the > aforementioned PTG etherpads. 
By closing the gaps with nested resource > providers, we'll open the door for being able to support minimum > bandwidth scheduling as well. > > So, during this day, we also discussed NUMA affinity, and we said that we could possibly use nested resource providers for NUMA cells in Stein, but given we don't yet have a specific Placement API query, NUMA affinity should still use the NUMATopologyFilter. That said, when looking at how to use this filter for vGPUs, it looks to me that I'd need to provide a new version of the NUMACell object and modify the virt.hardware module. Are we also accepting this (given it's a temporary solution), or should we wait for the Placement API support? Folks, what are your thoughts? -Sylvain > Cells > ===== > On cells, the main discussions were around resiliency "down" and > poor-performing cells and cross-cell migration. Please see the "Cells" > section of our PTG etherpad for all the details and additional topics we > discussed. > > Some multi-cell resiliency work was completed in Rocky and is continuing > in-progress for Stein, so there are no surprises there. Based on > discussion at the PTG, there's enough info to start work on the > cross-cell migration functionality. > > "Cross-project Day" > =================== > We had all of our cross-project discussions with the Cinder, Cyborg, > Neutron, and Ironic teams on Thursday. Please see the "Thursday" section > of our etherpad for details of all topics discussed. > > With the Cinder team, we went over plans for volume-backed rebuild, > improving the boot-from-volume experience by accepting volume type, and > detach/attach of root volumes. We agreed to move forward with these > features. This was also the start of a discussion around transfer of > ownership of resources (volume/instance/port/etc) from one project/user > to another.
The current idea is to develop a tool that will do the > database surgery correctly, instead of trying to implement ownership > transfer APIs in each service and orchestrating them. More details on > that are to come. > > With the Cyborg team, we focused on solidifying what Nova changes would > be needed to integrate with Cyborg, and the Cyborg team is going to > propose a Nova spec for those changes: > > https://etherpad.openstack.org/p/stein-ptg.cyborg-nova-new > > With the Neutron team, we had a demo of minimum bandwidth scheduling to > kick things off. A link to a writeup about the demo is available here if > you missed it: > > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134957.html > > Afterward, we discussed heterogeneous (linuxbridge, ovs, etc) Neutron > ML2 backends and the current inability to migrate an instance between > them -- we thought we had gained the ability by way of leveraging the > newest Neutron port binding API but it turns out there are still some > gaps. We discussed minimum bandwidth scheduling and ownership transfer > of a port. We quickly realized transferring a port from a non-shared > network would be really complicated, so we suspect the more realistic > use case for someone wanting to transfer an instance and its ports to > another project/user would involve an instance on a shared network, in > which case the transfer is just database surgery. > > With the Ironic team, we discussed the problem of Nova/Ironic powersync > wherein an instance that had been powered off via the Nova API is turned > on via IPMI by a maintenance engineer to perform maintenance, is turned > back off by Nova, disrupting maintenance. We agreed that Ironic will > leverage Nova's external events API to notify Nova when a node has been > powered on and should be considered ON so that Nova will not try to shut > it down. 
We also discussed the need for failure domains for > nova-computes controlling subsets of Ironic nodes and agreed to > implement it as a config option in the [ironic] section to specify an > Ironic partition key and a list services with which a node should peer. > We also discussed whether to deprecate the ComputeCapabilities filter > and we agreed to deprecate it. But, judging from the ML thread about it: > > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/135059.html > > I'm not sure it's appropriate to deprecate yet. > > Tech Debt and Miscellaneous Topic Day > ===================================== > Friday was our day for discussing topics from the "Tech Debt/Project > Management" and "Miscellaneous" sections of our PTG etherpad. Please see > the etherpad for all the notes taken on those discussions. > > The major topics that stand out to me were the proposal to move to > Keystone unified limits and filling in gaps in openstackclient (OSC) for > support of newer compute API microversions and achieving parity with > novaclient. Example: migrations and boot-from-volume work differently > between openstackclient and novaclient. The support of OSC is coming up > on the ML now as a prospective community-wide goal for the T series: > > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/135107.html > > On unified limits, we agreed we should migrate to unified limits, noting > that I think we must wait for a few more oslo.limit changes to land > first. We agreed to drop per user limits on resources when we move to > unified limits. This means that we will no longer allow setting a limit > on a resource for a particular user -- only for a particular project. > Note that with unified limits, we will gain the ability to have strict > two-level hierarchy, which should address the reasons why admins > leverage per user limits, at present. We will signal the upcoming change > with a 'nova-status upgrade check'. 
And we're freezing all other > quota-related features until we integrate with unified limits. > > I think that's about it for the "summary" which has gotten pretty long > here. Find us on IRC in #openstack-nova or email us on this mailing list > with the [nova] tag if you have any questions about any discussions from > the PTG. > > Cheers, > -melanie > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rochelle.grober at huawei.com Wed Sep 26 23:17:58 2018 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Wed, 26 Sep 2018 23:17:58 +0000 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch>, Message-ID: 1B24E0FB-005A-4A86-AF27-6659D912A07F Oh, very definitely +1000 -------------------------------------------------- Rochelle Grober Rochelle Grober M: +1-6508889722(preferred) E: rochelle.grober at huawei.com 2012实验室-硅谷研究所技术规划及合作部 2012 Laboratories-Silicon Valley Technology Planning & Cooperation,Silicon Valley Research Center From:Mathieu Gagné To:openstack-sigs at lists.openstack.org, Cc:OpenStack Development Mailing List (not for usage questions),OpenStack Operators, Date:2018-09-26 12:41:24 Subject:Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series +1 Yes please! -- Mathieu On Wed, Sep 26, 2018 at 2:56 PM Tim Bell wrote: > > > Doug, > > Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. 
> > To give it some context and the motivation: > > At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). > > One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. > > I would strongly support a goal which targets > > - All new projects should have the end user facing functionality fully exposed via the unified client > - Existing projects should aim to close the gap within 'N' cycles (N to be defined) > - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) > - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) > > The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. > > It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. > > Tim > > -----Original Message----- > From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 26 September 2018 at 18:00 > To: openstack-dev , openstack-operators , openstack-sigs > Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series > > It's time to start thinking about community-wide goals for the T series. 
> > We use community-wide goals to achieve visible common changes, push for > basic levels of consistency and user experience, and efficiently improve > certain areas where technical debt payments have become too high - > across all OpenStack projects. Community input is important to ensure > that the TC makes good decisions about the goals. We need to consider > the timing, cycle length, priority, and feasibility of the suggested > goals. > > If you are interested in proposing a goal, please make sure that before > the summit it is described in the tracking etherpad [1] and that you > have started a mailing list thread on the openstack-dev list about the > proposal so that everyone in the forum session [2] has an opportunity to > consider the details. The forum session is only one step in the > selection process. See [3] for more details. > > Doug > > [1] https://etherpad.openstack.org/p/community-goals > [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 > [3] https://governance.openstack.org/tc/goals/index.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs _______________________________________________ openstack-sigs mailing list openstack-sigs at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mordred at inaugust.com Wed Sep 26 23:18:15 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 26 Sep 2018 18:18:15 -0500 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: On 09/26/2018 04:12 PM, Dean Troyer wrote: > On Wed, Sep 26, 2018 at 3:01 PM, Doug Hellmann wrote: >> Would it be useful to have the SDK work in OSC as a prerequisite to the >> goal work? I would hate to have folks have to write a bunch of things >> twice. > > I don't think this is necessary, once we have the auth and service > discovery/version negotiation plumbing in OSC properly new things can > be done in OSC without having to wait for conversion. Any of the > existing client libs that can utilize an adapter from the SDK makes > this even simpler for conversion. As one might expect, I agree with Dean. I don't think we need to wait on it. From mordred at inaugust.com Wed Sep 26 23:45:29 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 26 Sep 2018 18:45:29 -0500 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> Message-ID: <1a886e1e-07d7-5326-6a9f-3203367a20a8@inaugust.com> On 09/26/2018 04:33 PM, Dean Troyer wrote: > On Wed, Sep 26, 2018 at 3:44 PM, Matt Riedemann wrote: >> I started documenting the compute API gaps in OSC last release [1]. It's a >> big gap and needs a lot of work, even for existing CLIs (the cold/live >> migration CLIs in OSC are a mess, and you can't even boot from volume where >> nova creates the volume for you). That's also why I put something into the >> etherpad about the OSC core team even being able to handle an onslaught of >> changes for a goal like this.
> > The OSC core team is very thin, yes, it seems as though companies > don't like to spend money on client-facing things...I'll be in the > hall following this thread should anyone want to talk... > > The migration commands are a mess, mostly because I got them wrong to > start with and we have only tried to patch it up, this is one area I > think we need to wipe clean and fix properly. Yay! Major version > release! > >> I thought the same, and we talked about this at the Austin summit, but OSC >> is inconsistent about this (you can live migrate a server but you can't >> evacuate it - there is no CLI for evacuation). It also came up at the Stein >> PTG with Dean in the nova room giving us some direction. [2] I believe the >> summary of that discussion was: > >> a) to deal with the core team sprawl, we could move the compute stuff out of >> python-openstackclient and into an osc-compute plugin (like the >> osc-placement plugin for the placement service); then we could create a new >> core team which would have python-openstackclient-core as a superset > > This is not my first choice but is not terrible either... > >> b) Dean suggested that we close the compute API gaps in the SDK first, but >> that could take a long time as well...but it sounded like we could use the >> SDK for things that existed in the SDK and use novaclient for things that >> didn't yet exist in the SDK > > Yup, this can be done in parallel. The unit of decision for use sdk > vs use XXXclient lib is per-API call. If the client lib can use an > SDK adapter/session it becomes even better. I think the priority for > what to address first should be guided by complete gaps in coverage > and the need for microversion-driven changes. > >> This might be a candidate for one of these multi-release goals that the TC >> started talking about at the Stein PTG. 
I could see something like this >> being a goal for Stein: >> >> "Each project owns its own osc- plugin for OSC CLIs" >> >> That deals with the core team and sprawl issue, especially with stevemar >> being gone and dtroyer being distracted by shiny x-men bird related things. >> That also seems relatively manageable for all projects to do in a single >> release. Having a single-release goal of "close all gaps across all service >> types" is going to be extremely tough for any older projects that had CLIs >> before OSC was created (nova/cinder/glance/keystone). For newer projects, >> like placement, it's not a problem because they never created any other CLI >> outside of OSC. > > I think the major difficulty here is simply how to migrate users from > today's state to the future state in a reasonable manner. If we could teach > OSC how to handle the same command being defined in multiple plugins > properly (hello entrypoints!) it could be much simpler as we could > start creating the new plugins and switch as the new command > implementations become available rather than having a hard cutover. > > Or maybe the definition of OSC v4 is as above and we just work at it > until complete and cut over at the end. I think that sounds pretty good, actually. We can also put the 'just get the sdk Connection' code in. You mentioned earlier that python-*client that can take an existing ksa Adapter as a constructor parameter makes this easier - maybe let's put that down as a workitem for this? Because if we could do that, then we know we've got discovery and config working consistently across the board no matter if a call is using sdk or python-*client primitives under the cover - so everything will respond to env vars and command line options and clouds.yaml consistently. For that to work, a python-*client Client that took a keystoneauth1.adapter.Adapter would need to take it as gospel and not do further processing of config, otherwise the point is defeated.
But it should be straightforward to do in most cases, yeah? > Note that the current APIs > that are in-repo (Compute, Identity, Image, Network, Object, Volume) > are all implemented using the plugin structure, OSC v4 could start as > the breaking out of those without command changes (except new > migration commands!) and then the plugins all re-write and update at > their own tempo. Dang, did I just deconstruct my project? Main difference is making sure these new deconstructed plugin teams understand the client support lifecycle - which is that we don't drop support for old versions of services in OSC (or SDK). It's a shift from the support lifecycle and POV of python-*client, but it's important and we just need to all be on the same page. > One thing I don't like about that is we just replace N client libs > with N (or more) plugins now and the number of things a user must > install doesn't go down. I would like to hear from anyone who deals > with installing OSC if that is still a big deal or should I let go of > that worry? I think it's a worry, although not AS big a worry. With independent plugin repos and deliverables, there's a pile for pip to install, but the plugins are tiny. There is also a new set of packages for the distro maintainers, but maybe still not terrible. It's still better than python-*client because python-*client pulls in so many transitive dependencies because of all of the different ways the various clients decided to solve the world. From persia at shipstone.jp Thu Sep 27 00:27:33 2018 From: persia at shipstone.jp (Emmet Hikory) Date: Thu, 27 Sep 2018 09:27:33 +0900 Subject: [openstack-dev] Last day for TC voting Message-ID: <20180927002733.GA19144@shipstone.jp> We are coming down to the last hours for voting in the TC election. Voting ends Sep 27, 2018 23:45 UTC. Search your gerrit preferred email address [0] for the following subject: Poll: Stein TC Election That is your ballot and links you to the voting application. Please vote.
If you have voted, please encourage your colleagues to vote. Candidate statements are linked to the names of all confirmed candidates: https://governance.openstack.org/election/#stein-tc-candidates What to do if you don't see the email and have a commit in at least one of the official programs projects[1]: * check the trash of your gerrit Preferred Email address[0], in case it went into trash or spam * wait a bit and check again, in case your email server is a bit slow * find the sha of at least one commit from the program project repos[1] and email the election officials [2] If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot. Please vote! Last time we checked, there were over 1200 unvoted ballots, so your vote can make a significant difference. [0] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your Preferred Email. That is where the ballot has been sent. [1] https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2018-elections [2] https://governance.openstack.org/election/#election-officials -- Emmet HIKORY From mriedemos at gmail.com Thu Sep 27 00:45:54 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 26 Sep 2018 19:45:54 -0500 Subject: [openstack-dev] [nova] Stein PTG summary In-Reply-To: References: <03b8b23f-31b6-e5fa-675d-5a40fbab58b5@gmail.com> Message-ID: On 9/26/2018 5:30 PM, Sylvain Bauza wrote: > So, during this day, we also discussed NUMA affinity and we said > that we could possibly use nested resource providers for NUMA cells in > Stein, but given we don't yet have a specific Placement API query, NUMA > affinity should still be using the NUMATopologyFilter. > That said, when looking at how to use this filter for vGPUs, it looks > to me that I'd need to provide a new version for the NUMACell object and > modify the virt.hardware module.
Are we also accepting this (given it's > a temporary question), or should we wait for the Placement API > support? > > Folks, what are your thoughts? I'm pretty sure we've said several times already that modeling NUMA in Placement is not something for which we're holding up the extraction. -- Thanks, Matt From wenranxiao at gmail.com Thu Sep 27 02:11:03 2018 From: wenranxiao at gmail.com (wenran xiao) Date: Thu, 27 Sep 2018 10:11:03 +0800 Subject: [openstack-dev] [neutron] subnet pool can not delete prefixes Message-ID: Related bug: https://bugs.launchpad.net/neutron/+bug/1792901 Any suggestions are welcome! Cheers, -wenran -------------- next part -------------- An HTML attachment was scrubbed... URL: From tengqim at cn.ibm.com Thu Sep 27 02:27:39 2018 From: tengqim at cn.ibm.com (Qiming Teng) Date: Thu, 27 Sep 2018 02:27:39 +0000 Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration In-Reply-To: References: Message-ID: <20180927022738.GA22304@rcp.sl.cloud9.ibm.com> Hi, Due to many reasons, I cannot join you at this event, but I would like to leave some comments here for reference. On Tue, Sep 18, 2018 at 11:27:29AM +0800, Rico Lin wrote: > *TL;DR* > *How about a forum in Berlin for discussing autoscaling integration (as a > long-term goal) in OpenStack?* First of all, there is nothing called "auto-scaling" in my mind and "auto" is most of the time a scary word to users. It means the service or tool is hiding some details from the users when it is doing something without human intervention. There are cases where this can be useful, and there are also many other cases where the service or tool messes things up into a state that is difficult to recover from. What matters most is the usage scenarios we support. I don't think users care that much how project teams are organized.
> Hi all, as we start to discuss how we can join development between Heat and Senlin > as we originally planned when we decided to fork Senlin from Heat a long time > ago. > > IMO the biggest issue we have now is that we have users using autoscaling in > both services; there appears to be a lot of duplicated effort, and some great > enhancements exist in one service but not the other. > As a long-term goal (from the beginning), we should try to join development > to sync functionality, and move users to use Senlin for autoscaling. So we > should start to review this goal, or at least we should try to discuss how > we can help users without breaking or forcing anything. The original plan, iirc, was to make sure Senlin resources are supported in Heat, and we will gradually fade out the existing 'AutoScalingGroup' and related resource types in Heat. I have no clue since when Heat became interested in "auto-scaling" again. > It would be great if we could build a common library across projects, and use > that common library in both projects, make sure we have all improvements > implemented in that library, and finally use Senlin through that > library call in the Heat autoscaling group. And in the long term, we would let all > users use one general way instead of multiple ways that generate huge > confusion for users. The so-called "auto-scaling" is always a solution, built by orchestrating many moving parts across the infrastructure. In some cases, you may have to install agents into VMs for workload metering. I am not convinced this can be done using a library approach. > *As an action, I propose we have a forum in Berlin and sync up all effort > from both teams to plan for an ideal scenario design. The forum submission [1] > ended at 9/26.* > It would also benefit both teams to start to think about how they can > modularize those functionalities for easier integration in the future. > > From some Heat PTG sessions, we kept bringing out ideas on how we can improve > current solutions for autoscaling.
We should start to talk about whether it > makes sense to combine all group resources into one, and inherit from it > for other resources (ideally deprecating the rest of the resource types). For example, we > can do batch create/delete in Resource Group, but not in ASG. We definitely > have some unsynchronized work within Heat, and across Heat and Senlin. Totally agree with you on this. We should strive to minimize the technologies users have to master when they have a need. > Please let me know who is interested in this idea, so we can work together > and reach our goal step by step. > Also please share your thoughts if you have any concerns about this proposal. > > [1] https://www.openstack.org/summit/berlin-2018/call-for-presentations > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From haleyb.dev at gmail.com Thu Sep 27 02:40:06 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Wed, 26 Sep 2018 22:40:06 -0400 Subject: [openstack-dev] [neutron] subnet pool can not delete prefixes In-Reply-To: References: Message-ID: <1588607e-9358-5c9f-f9d9-e979db8982a0@gmail.com> On 09/26/2018 10:11 PM, wenran xiao wrote: > Related bug: https://bugs.launchpad.net/neutron/+bug/1792901 > Any suggestions are welcome! Removing a prefix from a subnetpool is not supported; there was an inadvertent change to the client that made it seem possible.
We are in the process of reverting it: https://review.openstack.org/#/c/599633/ -Brian From agarwalvishakha18 at gmail.com Thu Sep 27 03:09:51 2018 From: agarwalvishakha18 at gmail.com (vishakha agarwal) Date: Thu, 27 Sep 2018 08:39:51 +0530 Subject: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs In-Reply-To: <16618f03cef.125129a738977.7142710598346158978@ghanshyammann.com> References: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> <48f4ddf1-3d93-340e-3ad2-11bc4ef004ef@redhat.com> <1537868010.2456380.1519770432.11C4E0C5@webmail.messagingengine.com> <16618f03cef.125129a738977.7142710598346158978@ghanshyammann.com> Message-ID: > From : Colleen Murphy > To : > Date : Tue, 25 Sep 2018 18:33:30 +0900 > Subject : Re: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs > ============ Forwarded message ============ > > On Mon, Sep 24, 2018, at 8:40 PM, John Dennis wrote: > > > On 9/24/18 8:00 AM, Colleen Murphy wrote: > > > > This is in regard to https://launchpad.net/bugs/1641625 and the proposed patch https://review.openstack.org/588211 for it. Thanks Vishakha for getting the ball rolling. > > > > > > > > tl;dr: Keystone as an IdP should support sending non-strings/lists-of-strings as user attribute values, specifically lists of keystone groups, here's how that might happen. > > > > > > > > Problem statement: > > > > > > > > When keystone is set up as a service provider with an external non-keystone identity provider, it is common to configure the mapping rules to accept a list of group names from the IdP and map them to some property of a local keystone user, usually also a keystone group name. When keystone acts as the IdP, it's not currently possible to send a group name as a user property in the assertion. There are a few problems: > > > > > > > > 1. 
We haven't added any openstack_groups key in the creation of the SAML assertion (http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164). > > > > 2. If we did, this would not be enough. Unlike other IdPs, in keystone there can be multiple groups with the same name, namespaced by domain. So it's not enough for the SAML AttributeStatement to contain a semi-colon-separated list of group names, since a user could theoretically be a member of two or more groups with the same name. > > > > * Why can't we just send group IDs, which are unique? Because two different keystones are not going to have independent groups with the same UUID, so we cannot possibly map an ID of a group from keystone A to the ID of a different group in keystone B. We could map the ID of the group in A to the name of a group in B but then operators need to create groups with UUIDs as names which is a little awkward for both the operator and the user who now is a member of groups with nondescriptive names. > > > > 3. If we then were able to encode a complex type like a group dict in a SAML assertion, we'd have to deal with it on the service provider side by being able to parse such an environment variable from the Apache headers. > > > > 4. The current mapping rules engine uses basic python string formatting to translate remote key-value pairs to local rules. We would need to change the mapping API to work with values more complex than strings and lists of strings. > > > > > > > > Possible solution: > > > > > > > > Vishakha's patch (https://review.openstack.org/588211) starts to solve (1) but it doesn't go far enough to solve (2-4). What we talked about at the PTG was: > > > > > > > > 2. Encode the group+domain as a string, for example by using the dict string repr or a string representation of some custom XML and maybe base64 encoding it.
> > > > * It's not totally clear whether the AttributeValue class of the pysaml2 library supports any data types outside of the xmlns:xs namespace or whether nested XML is an option, so encoding the whole thing as an xs:string seems like the simplest solution. > > > > 3. The SP will have to be aware that openstack_groups is a special key that needs the encoding reversed. > > > > * I wrote down "MultiDict" in my notes but I don't recall exactly what format the environment variable would take that would make a MultiDict make sense here, in any case I think encoding the whole thing as a string eliminates the need for this. > > > > 4. We didn't talk about the mapping API, but here's what I think. If we were just talking about group names, the mapping API today would work like this (slight oversimplification for brevity): > > > > > > > > Given a list of openstack_groups like ["A", "B", "C"], it would work like this: > > > > > > > > [ > > > > { > > > > "local": > > > > [ > > > > { > > > > "group": > > > > { > > > > "name": "{0}", > > > > "domain": > > > > { > > > > "name": "federated_domain" > > > > } > > > > } > > > > } > > > > ], "remote": > > > > [ > > > > { > > > > "type": "openstack_groups" > > > > } > > > > ] > > > > } > > > > ] > > > > (paste in case the spacing makes this unreadable: http://paste.openstack.org/show/730623/ ) > > > > > > > > But now, we no longer have a list of strings but something more like [{"name": "A", "domain_name": "Default"}, {"name": "B", "domain_name": "Default"}, {"name": "A", "domain_name": "domainB"}]. Since {0} isn't a string, this example doesn't really work. Instead, let's assume that in step (3) we converted the decoded AttributeValue text to an object.
Then the mapping could look more like this: > > > > > > > > [ > > > > { > > > > "local": > > > > [ > > > > { > > > > "group": > > > > { > > > > "name": "{0.name}", > > > > "domain": > > > > { > > > > "name": "{0.domain_name}" > > > > } > > > > } > > > > } > > > > ], "remote": > > > > [ > > > > { > > > > "type": "openstack_groups" > > > > } > > > > ] > > > > } > > > > ] > > > > (paste: http://paste.openstack.org/show/730622/ ) > > > > > > > > Alternatively, we could forget about the namespacing problem and simply say we only pass group names in the assertion, and if you have ambiguous group names you're on your own. We could also try to support both, e.g. have an openstack_groups mean a list of group names for simpler use cases, and openstack_groups_unique mean the list of encoded group+domain strings for advanced use cases. > > > > > > > > Finally, whatever we decide for groups we should also apply to openstack_roles which currently only supports global roles and not domain-specific roles. > > > > > > > > (It's also worth noting, for clarity, that the samlize function does handle namespaced projects, but this is because it's retrieving the project from the token and therefore there is only ever one project and one project domain so there is no ambiguity.) > > > > > > > > > > A few thoughts to help focus the discussion: > > > > > > * Namespacing is critical, no design should be permitted which allows > > > for ambiguous names. Ambiguous names are a security issue and can be > > > used by an attacker. The SAML designers recognized the importance to > > > disambiguate names. In SAML names are conveyed inside a NameIdentifier > > > element which (optionally) includes "name qualifier" attributes which in > > > SAML lingo is a namespace name. > > > > > > * SAML does not define the format of an attribute value. You can use > > > anything you want as long as it can be expressed in valid XML as long as > > > the cooperating parties know how to interpret the XML content. 
But > > > herein lies the problem. Very few SAML implementations know how to > > > consume an attribute value other than a string. In the real world, > > > despite what the SAML spec says is permitted is the constraint attribute > > > values is a string. > > > > > > * I haven't looked at the pysaml implementation but I'd be surprised if > > > it treated attribute values as anything other than a string. In theory > > > it could take any Python object (or JSON) and serialize it into XML but > > > you would still be stuck with the receiver being unable to parse the > > > attribute value (see above point). > > > > > > * You can encode complex data in an attribute value while only using a > > > simple string. The only requirement is the relying party knowing how to > > > interpret the string value. Note, this is distinctly different than > > > using non-string attribute values because of who is responsible for > > > parsing the value. If you use a non-string attribute value the SAML > > > library need to know how to parse it, none or very few will know how to > > > process that element. But if it's a string value the SAML library will > > > happily pass that string back up to the application who can then > > > interpret it. The easiest way to embed complex data in a string is with > > > JSON, we do it all the time, all over the place in OpenStack. [1][2] > > > > > > So my suggestion would be to give the attribute a meaningful name. > > > Define a JSON schema for the data and then let the upper layers decode > > > the JSON and operate on it. This is no different than any other SAML > > > attribute passed as a string, the receive MUST know how to interpret the > > > string value. > > > > > > [1] We already pass complex data in a SAML attribute string value. We > > > permit a comma separated list of group names to appear in the 'groups' > > > mapping rule (although I don't think this feature is documented in our > > > mapping rules documentation). 
The receiver (our mapping engine) has > > > hard-coded logic to look for a list of names. > > > > > > [2] We might want to prepend a format specifier to a string containing > > > complex data, e.g. "JSON:{json object}". Our parser could then look for > > > a leading format tag and if it finds one strip it off and pass the rest > > > of the string into the proper parser. > > > > > > -- > > > John Dennis > > > > > > > Thanks for this response, John. I think serializing to JSON and prepending a format specifier makes sense. > > > > Colleen Thanks for the response Colleen, John. After reading the above mails, what I understood is that we pass the list of group names specific to a keystone in a JSON schema, prepending a tag, e.g. groups_names = 'JSON:{"groups_names": ["a", "b", "c"]}'. In the SAML assertion, the Attribute name can be openstack_groups and the Attribute value will be 'JSON:{"groups_names": ["a", "b", "c"]}'. Vishakha > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > From renat.akhmerov at gmail.com Thu Sep 27 06:29:07 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Thu, 27 Sep 2018 13:29:07 +0700 Subject: [openstack-dev] [mistral] Extend created(updated)_at by started(finished)_at to clarify the duration of the task In-Reply-To: References: Message-ID: <87873a83-9b54-4586-8d4b-1fa88a219755@Spark> Hi Oleg, I looked at the blueprint. Looks good to me, I understand the motivation behind it. The fact that we use created_at and updated_at now to see the duration of the task is often confusing, I agree. So this would be a good addition and it is backward compatible.
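To illustrate the backward-compatible part, a client like CloudFlow could compute durations along these lines (a rough Python sketch, not actual CloudFlow code; the field names follow the blueprint and the timestamp format shown is an assumption):

```python
from datetime import datetime, timezone

def task_duration(task):
    """Compute a task's duration from a Mistral task dict.

    Prefers the proposed started_at/finished_at fields when the API
    returns them, and falls back to created_at/updated_at (the old,
    approximate calculation) otherwise.
    """
    def parse(ts):
        # Illustrative timestamp format; an assumption for this sketch.
        return datetime.strptime(ts, '%Y-%m-%d %H:%M:%S').replace(
            tzinfo=timezone.utc)

    start = task.get('started_at') or task['created_at']
    end = task.get('finished_at') or task['updated_at']
    return parse(end) - parse(start)

# A task from an API exposing the new fields: duration is 10 seconds,
# not the 20 seconds the old created_at/updated_at delta would suggest.
new_style = {
    'created_at': '2018-09-27 06:29:00',
    'updated_at': '2018-09-27 06:29:20',
    'started_at': '2018-09-27 06:29:05',
    'finished_at': '2018-09-27 06:29:15',
}
print(task_duration(new_style).total_seconds())  # 10.0

# A task from an older API: only created_at/updated_at are present.
old_style = {
    'created_at': '2018-09-27 06:29:00',
    'updated_at': '2018-09-27 06:29:20',
}
print(task_duration(old_style).total_seconds())  # 20.0
```

If the new fields are absent (an older Mistral), the same function silently falls back to the old, approximate calculation.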
The only subtle thing is that when you make changes in CloudFlow we'll have to make a note that since version X of CloudFlow (that will be using new fields to calculate durations) it will require Mistral Stein. Or, another option is to make it flexible: if those fields are present in the HTTP response, we can use them for calculation and if not, use the old way. Thanks Renat Akhmerov @Nokia On 26 Sep 2018, 18:02 +0700, Олег Овчарук , wrote: > Hi everyone! Please take a look at the blueprint that I've just created https://blueprints.launchpad.net/mistral/+spec/mistral-add-started-finished-at > I'd like to implement this feature, and I also want to update CloudFlow when this is done. Please let me know in the blueprint if I can start implementing. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.rydberg at citynetwork.eu Thu Sep 27 06:32:57 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Thu, 27 Sep 2018 08:32:57 +0200 Subject: [openstack-dev] [publiccloud-wg] Reminder weekly meeting Public Cloud WG Message-ID: Hi everyone, Time for a new meeting for PCWG - today (27th) 1400 UTC in #openstack-publiccloud! Agenda found at https://etherpad.openstack.org/p/publiccloud-wg We will again have a short brief from the PTG for those of you that missed that last week. Also, time to start planning for the upcoming summit - forum sessions submitted etc. Another important item on the agenda is the prio/ranking of our "missing features" list. We have identified a few cross project goals already that we see as important, but we need more operators to engage in this ranking. Talk to you later today!
Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From skaplons at redhat.com Thu Sep 27 07:20:34 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 27 Sep 2018 09:20:34 +0200 Subject: [openstack-dev] [neutron] Ryu integration with OpenStack In-Reply-To: References: <418337EF-FF89-4B26-8EF7-B8C5CF752E24@redhat.com> Message-ID: <70907B1D-9D78-4004-9059-5F154FA6EDE9@redhat.com> Hi, The code of the app is in https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py and the classes for the specific bridge types are in https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native
Only when I stop the neutron openvswitch agent in the neutron gateway as well do the bridges there get disconnected. > > > > I'm unable to find where in the OpenStack code I can access this implementation, because I intend to make a few tweaks to the architecture which is currently present. Also, I'd like to know which app the Ryu SDN controller is running by default at the moment. I feel the information in the code can help me find it too. > > The Ryu app is started by neutron-openvswitch-agent in: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34 > Is that what you are looking for? > > > > > Regards, > > Niket — Slawek Kaplonski Senior software engineer Red Hat From dangtrinhnt at gmail.com Thu Sep 27 07:45:58 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 27 Sep 2018 16:45:58 +0900 Subject: [openstack-dev] [Searchlight] Team meeting cancellation today Message-ID: Dear team, Because most of our cores are based in China and South Korea, which are in the holiday
season, and we had covered most of the things during our vPTG last week, we will cancel the team meeting this week. Next meeting will be held on 11 Oct 2018. Bests, *Trinh Nguyen *| Founder & Chief Architect *E:* dangtrinhnt at gmail.com | *W:* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Thu Sep 27 08:04:45 2018 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Thu, 27 Sep 2018 10:04:45 +0200 Subject: [openstack-dev] [docs][charms] Updating Deployment Guides indices of published pages Message-ID: Hello docs team, What would it take to re-generate the indices for Deployment Guides on the published Queens [0] and Rocky [1] docs pages? It seems that the charms project has missed the index for those releases due to some issues which now has been resolved [2]. We would love to reclaim our space in the list! 0: https://docs.openstack.org/queens/deploy/ 1: https://docs.openstack.org/rocky/deploy/ 2: https://review.openstack.org/#/q/topic:enable-openstack-manuals-rocky-latest+(status:open+OR+status:merged) -- Frode Nordahl -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From colleen at gazlene.net Thu Sep 27 08:11:43 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 27 Sep 2018 10:11:43 +0200 Subject: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs In-Reply-To: References: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> <48f4ddf1-3d93-340e-3ad2-11bc4ef004ef@redhat.com> <1537868010.2456380.1519770432.11C4E0C5@webmail.messagingengine.com> <16618f03cef.125129a738977.7142710598346158978@ghanshyammann.com> Message-ID: <1538035903.1137963.1522329592.7CA61F29@webmail.messagingengine.com> On Thu, Sep 27, 2018, at 5:09 AM, vishakha agarwal wrote: > > From : Colleen Murphy > > To : > > Date : Tue, 25 Sep 2018 18:33:30 +0900 > > Subject : Re: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs > > ============ Forwarded message ============ > > > On Mon, Sep 24, 2018, at 8:40 PM, John Dennis wrote: > > > > On 9/24/18 8:00 AM, Colleen Murphy wrote: > > > > > This is in regard to https://launchpad.net/bugs/1641625 and the proposed patch https://review.openstack.org/588211 for it. Thanks Vishakha for getting the ball rolling. > > > > > > > > > > tl;dr: Keystone as an IdP should support sending non-strings/lists-of-strings as user attribute values, specifically lists of keystone groups, here's how that might happen. > > > > > > > > > > Problem statement: > > > > > > > > > > When keystone is set up as a service provider with an external non-keystone identity provider, it is common to configure the mapping rules to accept a list of group names from the IdP and map them to some property of a local keystone user, usually also a keystone group name. When keystone acts as the IdP, it's not currently possible to send a group name as a user property in the assertion. There are a few problems: > > > > > > > > > > 1. 
We haven't added any openstack_groups key in the creation of the SAML assertion (http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164). > > > > > 2. If we did, this would not be enough. Unlike other IdPs, in keystone there can be multiple groups with the same name, namespaced by domain. So it's not enough for the SAML AttributeStatement to contain a semi-colon-separated list of group names, since a user could theoretically be a member of two or more groups with the same name. > > > > > * Why can't we just send group IDs, which are unique? Because two different keystones are not going to have independent groups with the same UUID, so we cannot possibly map an ID of a group from keystone A to the ID of a different group in keystone B. We could map the ID of the group in in A to the name of a group in B but then operators need to create groups with UUIDs as names which is a little awkward for both the operator and the user who now is a member of groups with nondescriptive names. > > > > > 3. If we then were able to encode a complex type like a group dict in a SAML assertion, we'd have to deal with it on the service provider side by being able to parse such an environment variable from the Apache headers. > > > > > 4. The current mapping rules engine uses basic python string formatting to translate remote key-value pairs to local rules. We would need to change the mapping API to work with values more complex than strings and lists of strings. > > > > > > > > > > Possible solution: > > > > > > > > > > Vishakha's patch (https://review.openstack.org/588211) starts to solve (1) but it doesn't go far enough to solve (2-4). What we talked about at the PTG was: > > > > > > > > > > 2. Encode the group+domain as a string, for example by using the dict string repr or a string representation of some custom XML and maybe base64 encoding it. 
> > > > > * It's not totally clear whether the AttributeValue class of the pysaml2 library supports any data types outside of the xmlns:xs namespace or whether nested XML is an option, so encoding the whole thing as an xs:string seems like the simplest solution. > > > > > 3. The SP will have to be aware that openstack_groups is a special key that needs the encoding reversed. > > > > > * I wrote down "MultiDict" in my notes but I don't recall exactly what format the environment variable would take that would make a MultiDict make sense here, in any case I think encoding the whole thing as a string eliminates the need for this. > > > > > 4. We didn't talk about the mapping API, but here's what I think. If we were just talking about group names, the mapping API today would work like this (slight oversimplification for brevity): > > > > > > > > > > Given a list of openstack_groups like ["A", "B", "C"], it would work like this: > > > > > > > > > > [ > > > > > { > > > > > "local": > > > > > [ > > > > > { > > > > > "group": > > > > > { > > > > > "name": "{0}", > > > > > "domain": > > > > > { > > > > > "name": "federated_domain" > > > > > } > > > > > } > > > > > } > > > > > ], "remote": > > > > > [ > > > > > { > > > > > "type": "openstack_groups" > > > > > } > > > > > ] > > > > > } > > > > > ] > > > > > (paste in case the spacing makes this unreadable: http://paste.openstack.org/show/730623/ ) > > > > > > > > > > But now, we no longer have a list of strings but something more like [{"name": "A", "domain_name": "Default"} {"name": "B", "domain_name": "Default", "name": "A", "domain_name": "domainB"}]. Since {0} isn't a string, this example doesn't really work. Instead, let's assume that in step (3) we converted the decoded AttributeValue text to an object. 
Then the mapping could look more like this: > > > > > > > > > > [ > > > > > { > > > > > "local": > > > > > [ > > > > > { > > > > > "group": > > > > > { > > > > > "name": "{0.name}", > > > > > "domain": > > > > > { > > > > > "name": "{0.domain_name}" > > > > > } > > > > > } > > > > > } > > > > > ], "remote": > > > > > [ > > > > > { > > > > > "type": "openstack_groups" > > > > > } > > > > > ] > > > > > } > > > > > ] > > > > > (paste: http://paste.openstack.org/show/730622/ ) > > > > > > > > > > Alternatively, we could forget about the namespacing problem and simply say we only pass group names in the assertion, and if you have ambiguous group names you're on your own. We could also try to support both, e.g. have an openstack_groups mean a list of group names for simpler use cases, and openstack_groups_unique mean the list of encoded group+domain strings for advanced use cases. > > > > > > > > > > Finally, whatever we decide for groups we should also apply to openstack_roles which currently only supports global roles and not domain-specific roles. > > > > > > > > > > (It's also worth noting, for clarity, that the samlize function does handle namespaced projects, but this is because it's retrieving the project from the token and therefore there is only ever one project and one project domain so there is no ambiguity.) > > > > > > > > > > > > > A few thoughts to help focus the discussion: > > > > > > > > * Namespacing is critical, no design should be permitted which allows > > > > for ambiguous names. Ambiguous names are a security issue and can be > > > > used by an attacker. The SAML designers recognized the importance to > > > > disambiguate names. In SAML names are conveyed inside a NameIdentifier > > > > element which (optionally) includes "name qualifier" attributes which in > > > > SAML lingo is a namespace name. > > > > > > > > * SAML does not define the format of an attribute value. 
You can use > > > > anything you want as long as it can be expressed in valid XML and > > > > the cooperating parties know how to interpret the XML content. But > > > > herein lies the problem. Very few SAML implementations know how to > > > > consume an attribute value other than a string. In the real world, > > > > despite what the SAML spec says is permitted, the practical constraint > > > > is that attribute values are strings. > > > > > > > > * I haven't looked at the pysaml implementation but I'd be surprised if > > > > it treated attribute values as anything other than a string. In theory > > > > it could take any Python object (or JSON) and serialize it into XML but > > > > you would still be stuck with the receiver being unable to parse the > > > > attribute value (see above point). > > > > > > > > * You can encode complex data in an attribute value while only using a > > > > simple string. The only requirement is the relying party knowing how to > > > > interpret the string value. Note, this is distinctly different from > > > > using non-string attribute values because of who is responsible for > > > > parsing the value. If you use a non-string attribute value the SAML > > > > library needs to know how to parse it, and none or very few will know how to > > > > process that element. But if it's a string value the SAML library will > > > > happily pass that string back up to the application, which can then > > > > interpret it. The easiest way to embed complex data in a string is with > > > > JSON; we do it all the time, all over the place in OpenStack. [1][2] > > > > > > > > So my suggestion would be to give the attribute a meaningful name. > > > > Define a JSON schema for the data and then let the upper layers decode > > > > the JSON and operate on it. This is no different from any other SAML > > > > attribute passed as a string: the receiver MUST know how to interpret the > > > > string value.
> > > > > > > > [1] We already pass complex data in a SAML attribute string value. We > > > > permit a comma separated list of group names to appear in the 'groups' > > > > mapping rule (although I don't think this feature is documented in our > > > > mapping rules documentation). The receiver (our mapping engine) has > > > > hard-coded logic to look for a list of names. > > > > > > > > [2] We might want to prepend a format specifier to a string containing > > > > complex data, e.g. "JSON:{json object}". Our parser could then look for > > > > a leading format tag and, if it finds one, strip it off and pass the rest > > > > of the string into the proper parser. > > > > > > > > -- > > > > John Dennis > > > > > > > > > > Thanks for this response, John. I think serializing to JSON and prepending a format specifier makes sense. > > > > > > Colleen > > Thanks for the response, Colleen and John. After reading the above mails, > what I understood is that we pass the list of group names specific to a > keystone as a JSON blob with a prepended tag, e.g. groups_names = > "JSON:{groups_names:[a,b,c]}". In the SAML assertion the attribute name can be > openstack_groups and the attribute value will be > "JSON:{groups_names:[a,b,c]}". Not quite. The important part is that the group name needs to be tied to a domain name. The attribute statement generator can already produce a list of attributes of one type by just adding new attribute values to the attribute, see for example openstack_roles[1]. The attribute value then needs to be a single group + domain JSON blob.
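An aside: on the consuming side, the "JSON:" format-specifier parsing John proposes in [2] could look roughly like this. The helper names are hypothetical, not keystone's actual mapping-engine code:

```python
import json

FORMAT_PREFIX = "JSON:"

def parse_attribute_value(value):
    """Parse one SAML attribute value string.

    Values carrying a leading 'JSON:' format tag are decoded into
    Python objects; anything else passes through unchanged, so plain
    string attributes keep working.
    """
    if value.startswith(FORMAT_PREFIX):
        return json.loads(value[len(FORMAT_PREFIX):])
    return value

def groups_from_assertion(values):
    # Each attribute value is one group+domain blob, so two groups
    # with the same name in different domains stay distinguishable.
    return [parse_attribute_value(v) for v in values]
```

The mapping engine could then match on `{0.group_name}` / `{0.domain_name}`-style fields of the decoded objects rather than on bare strings.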
As an example, if a user is a member of group A in domain X and also a member of group B in domain Y, the attribute could look like this: JSON:{"group_name": "A", "domain_name": "X"} JSON:{"group_name": "B", "domain_name": "Y"} [1] http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n177 Colleen From aj at suse.com Thu Sep 27 08:18:01 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 27 Sep 2018 10:18:01 +0200 Subject: [openstack-dev] [docs][charms] Updating Deployment Guides indices of published pages In-Reply-To: References: Message-ID: <90fddc55-c1ad-b19a-fa55-761399a30d70@suse.com> On 27/09/2018 10.04, Frode Nordahl wrote: > Hello docs team, > > What would it take to re-generate the indices for Deployment Guides on > the published Queens [0] and Rocky [1] docs pages? In a nutshell: Patience ;) All looks fine, details below: They are regenerated as part of each merge to openstack-manuals. Looking at [0] it was last regenerated on the 11th, might be the post job was never run for the changes you reference in [2]. > It seems that the charms project has missed the index for those releases > due to some issues which now has been resolved [2].  We would love to > reclaim our space in the list! Right now we have a HUGE backlog of post jobs (70+ hours) due to high load of CI as mentioned by Clark earlier this month. After the next merge and post job run, those should be up. So, please check again once the backlog is worked through and a next change merged. If the content is then still not there, we need to investigate the post jobs and why they failed. Andreas > 0: https://docs.openstack.org/queens/deploy/ > 1: https://docs.openstack.org/rocky/deploy/ > 2: > https://review.openstack.org/#/q/topic:enable-openstack-manuals-rocky-latest+(status:open+OR+status:merged) -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From gmann at ghanshyammann.com Thu Sep 27 08:23:41 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 27 Sep 2018 17:23:41 +0900 Subject: [openstack-dev] [PTG][QA] QA PTG Stein Summary Message-ID: <1661a206217.11d35f3c014622.517615697715198997@ghanshyammann.com> Hi All, Thanks for joining the Stein QA PTG and making it successful. I am writing up the QA PTG summary; the detailed discussion can be found on the main PTG etherpad - https://etherpad.openstack.org/p/qa-stein-ptg We are continuing with an 'owner' for each work item so that we have a single point of contact to track it. 1. QA Help Room ------------------------------------------------------- The QA team was present in the Help Room on Monday. We were happy to help with a few queries about the Octavia multinode job and Kuryr-kubernetes testing. Other than that, there was not much that day except a few other random queries. 2. Rocky Retrospective --------------------------------------------------------- We discussed the Rocky retrospective first thing on Tuesday. We went through 1. what went well and 2. what needs to improve, and gathered some concrete action items. Patrole made good progress in the Rocky cycle with code as well as documentation. We were also able to fill the compute microversion gap almost completely up to Rocky. Action Items: - Need to add Tempest CLI documentation and other useful stuff from the TripleO docs to the Tempest docs - chandankumar - Run all tests in the tempest-full-parallel job and move it to the periodic job pipeline - afazekas - Need to merge the QA office hours: check with andrea about the 17 UTC office hour and, if OK, close that and move the current office hour from 9 UTC to 8 UTC. - gmann - Need to ask chandankumar or manik for a bug triage volunteer.
- gmann - Create a list of low-hanging items and publish it for new contributors - gmann We will be tracking the above AIs in our QA office hour to finish them on time. Owner: gmann Etherpad link: https://etherpad.openstack.org/p/qa-rocky-retrospective 3. Stable interfaces from Tempest Plugins ----------------------------------------------------------- We discussed having stable interfaces in Tempest plugins, like Tempest has, so that other plugins can consume them. Service clients are a good example of interfaces required for cross-project testing. For example: the congress tempest plugin needs to use the mistral service clients for integration testing of congress+mistral [1]. Similarly, Patrole needs to use the neutron tempest plugin service clients (for n/n-1/n-2). The idea is to have a lib or stable interface on the Tempest plugins side, like Tempest, so that other plugins can use it. We will start with some documentation about the use cases and benefits, and then work with the neutron-tempest-plugin team to expose their service clients as a stable interface. Once that is done, we can suggest the same to other plugins. Action Items: - Need some documentation and guidance with use cases, examples and benefits for plugins. - felipemonteiro - Start mailing list discussions on making specific plugins stable that are consumed by other plugins - felipemonteiro - Check with the requirements team about adding tempest plugins to g-r so that they can then be added to other plugins' requirements.txt - gmann Owner: felipemonteiro Etherpad link: https://etherpad.openstack.org/p/stable-interfaces-from-tempest-plugins 4. Tempest Plugins CI to cover stable branches & Plugins release and tagging clarification -------------------------------------------------------------------------------------------------------------- We discussed how other projects or plugins can set up CI to cover stable branch testing on their master changes.
The solution can be as simple as defining the supported stable branches and running them on the master gate (the same way Tempest does). The QA team will start guidelines on this. The other part we need to cover is release and tagging guidelines. There was a lot of confusion about the release of Tempest plugins in Rocky. To make this better, the QA team will write guidelines and document a clear process. Action Items: - Move/update documentation on branchless considerations in Tempest to somewhere more global so that it covers plugins documentation too - gmann - Add tagging and release clarification for plugins. - Talk with the neutron team about moving in-tree tempest plugins of stadium projects to neutron-tempest-plugin or separate tempest-plugin repositories - slaweq - Add config options to disable loading of plugins - gmann Owner: gmann Etherpad link: https://etherpad.openstack.org/p/tempest-plugins-ci-release-tagging-clarification 5. Tempest Cleanup Feature --------------------------------- The current Tempest CLI for cleaning up test resources is not great. It cleans up resources based on a saved_state.json file which saves the difference in resources before and after a Tempest run. This can end up cleaning up other, non-test resources that were created during the Tempest run. There is a QA spec proposing different approaches for cleanup[2]. After discussing all those approaches, we decided to go with resource_prefix. We will bring back the resource_prefix approach (which was removed after deprecation) and modify the "tempest cleanup" CLI to clean up resources based on resource_prefix. The complete discussion can be found in the etherpad. As of now felipemonteiro is the owner, but he will check with nicholashelgeson or AT&T folks about further work on this.
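A minimal sketch of what resource_prefix-based cleanup amounts to. This is illustrative only; the actual behaviour will be defined by the spec and the revived prefix support in data_utils.rand_name, and these helper names are hypothetical:

```python
import random
import string

def rand_name(name, prefix="tempest"):
    """Mimic a prefixed rand_name: '<prefix>-<name>-<random suffix>'."""
    suffix = "".join(random.choice(string.ascii_lowercase) for _ in range(8))
    return "%s-%s-%s" % (prefix, name, suffix)

def cleanup_candidates(resources, prefix="tempest"):
    """Select only resources whose name carries the test prefix.

    Unlike the saved_state.json diff, this leaves unrelated resources
    created during the run untouched, since they lack the prefix.
    """
    return [r for r in resources if r["name"].startswith(prefix + "-")]
```

The key point is that cleanup becomes a property of the resource name itself rather than of a before/after snapshot of the whole cloud.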
Action Items: - Update the spec with the idea from 0.0.2 (because it's relatively easy to implement) and get it merged - felipemonteiro/nicholashelgeson - Add back the resource_prefix config option and add it back to data_utils.rand_name - felipemonteiro/nicholashelgeson - Go through all tempest tests and make sure data_utils.rand_name is used for resources - felipemonteiro/nicholashelgeson - Update tempest cleanup - felipemonteiro/nicholashelgeson - Update documentation - felipemonteiro/nicholashelgeson Owner: felipemonteiro Etherpad links: https://etherpad.openstack.org/p/tempest-cleanup-feature 6. Tempest conf Plugin Discovery process ------------------------------------------------------------------------- This topic is about generating the Tempest conf from plugin config options too. This idea is more for python-tempestconf than for QA as such. But python-tempestconf folks were present in the QA room and agreed that this is doable in python-tempestconf itself. There is nothing from the QA side on this, so I would like to drop this item from QA tracking. Etherpad link: https://etherpad.openstack.org/p/tempest-conf-plugin-discovery-process 7. Proper handling of interface/volume/pci device attach/detach hotplug/unplug in tempest ------------------------------------------------------------------------- Tempest tests do not handle hotplug/unplug events properly. The guest does not check for button press events at early boot time. Hotplug events sent before the kernel is fully initialized can be lost. test_stamp_pattern.py could be unskipped if we tried to ssh to the VM before the hot plug event (volume attach). Also, there are API tests which know nothing about the guest state and therefore cannot know when the guest is ready to check for the button press. A detailed problem description can be found here [3]. The idea is to perform the ssh validation step before we consider the test server ready to use in a test.
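The validation step described above boils down to "wait until the guest is reachable before hot-plugging". A generic polling helper for that could look like the following sketch (not Tempest's actual validation code; the real check would use an ssh probe as the predicate):

```python
import time

def wait_until(predicate, timeout=60, interval=1):
    """Poll predicate() until it returns True or timeout elapses.

    Intended to gate hotplug operations on guest readiness, e.g.
    predicate = lambda: ssh_port_is_open(server_ip).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

Running this before the attach/detach call avoids sending the hotplug event while the kernel is still initializing.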
Action Items: - Add ssh validation steps for api/scenario tests where required - afazekas - Make run_validation default to true - afazekas - Soft reboot: does a nova event tell whether it was a soft reboot (check)? Can some special register on the machine tell it? - afazekas Owner: afazekas Etherpad link: https://etherpad.openstack.org/p/handling-of-interface-attach-detach-hotplug-unplug 8. Shared network handling ----------------------------------------------------------------------------- Attila observed a few tests failing when using a shared network. But it looks like the only 100% reproducible issue is test_create_list_show_delete_interfaces_by_fixed_ip[4][5] There should not be an issue for shared networks either, and as of now we will just fix the failing tests. Action Items: - Fix the failing tests - afazekas - Try to run tempest in parallel with shared network create/delete parallel tests to search for other incidents locally - afazekas Owner: afazekas Etherpad link: https://etherpad.openstack.org/p/shared-network-handling 9. Planning for Patrole Stable release ---------------------------------------------------------- We continued the Patrole stable release discussion at the PTG. We prepared a concrete list of items needed for a stable release, targeted for the Stein cycle. Along with multi-policy and system scope support, we will also check the framework's stability. The documentation is already in better shape. TODO items before the stable release: 1. multi-policy 2. system scope support 3. Better support for generic check policy rules (e.g. user_id:%(user.id)) 4. Iterate through all the Patrole interfaces/framework pieces which need to be usable outside of Patrole Action Items: - Let's finish the above planned items in Stein. - felipemonteiro Owner: felipemonteiro Etherpad link: https://etherpad.openstack.org/p/planning-for-patrole-stable-release 10.
Proposal for New QA project: Harbinger ----------------------------------------------------------------- OpenStack QA currently lacks a dataplane testing framework that can be easily consumed, configured and maintained. To fill that gap, there is a new project proposal called "Harbinger". Harbinger allows execution of various OpenStack data plane testing frameworks while maintaining a single standardized interface and input format. Currently it covers Shaker and Yardstick, and Kloudbuster support is WIP. This can be useful to consume in Eris (the extreme testing idea). There are a few points which need more clarification, like standardization of output and whether it can cover control plane testing. IMO, this is a good project to start, and it can be consumed in Eris and cross-community testing. The author of this project was not present at the PTG; felipemonteiro proxied this too. We would like to extend this discussion on the ML and with the extreme testing stakeholders, and also start a QA spec. Action Items: - There are many items we planned as AIs, but the first step will be to start the ML thread and the spec. Owner: felipemonteiro was asked to convey the discussion to the Harbinger author. Leaving the owner empty as of now. Etherpad links: https://etherpad.openstack.org/p/new-qa-project-harbinger 11. Clean up the tempest documentation ----------------------------------------------------------------- This is a perennial item :-). We discussed further improvements to the documentation, like a better doc structure, CLI docs, and consuming the Tempest-related docs in a central place, which is Tempest. We have a list of items to cover, each with a different assignee. Action Items: - Complete all the documentation points written in the etherpad. - tosky, gmann, masayukig Owner: gmann Etherpad links: https://etherpad.openstack.org/p/clean-up-the-tempest-documentation 12.
Consuming all Tempest CLI in gate ----------------------------------------------------------------- Tempest has many CLI commands and, due to a lack of unit tests[6], there is a chance we can (and did) break the CLI. The idea is to exercise all the CLI commands in gate jobs so that we can improve their test coverage. A few CLI commands will be covered in the main gate job and the others via functional testing. Action Items: - Continue this patch on zuul v3 - https://review.openstack.org/#/c/355666/ - masayukig - Add functional tests and a new functional job - gmann Owner: masayukig Etherpad links: https://etherpad.openstack.org/p/consuming-all-tempest-cli-in-gate 13. Migration from Launchpad to storyboard ----------------------------------------------------------------- We discussed the possibility of migrating to Storyboard. Patrole is the first QA project we are trying, and based on that we can proceed with the other projects. Action Items: - Wait for the BP script from the Storyboard team and then finish the Patrole project migration. - gmann - Feedback or requests for Storyboard: - Create a story for adding some field to indicate user interest (heat / vote / points) - afazekas - Gerrit automation work, e.g. a story about automatic assignee, or adding the Topic field in gerrit. - Have a way to not diverge too much in the set of used tags - possible solution: sort proposed tags by popularity Owner: gmann Etherpad links: https://etherpad.openstack.org/p/migration-from-lp-to-sb 14. QA Stein Priority: -------------------------------------------------------------------- We discussed the priority items for Stein and listed the items and the owner of each in the etherpad below. We will track each item in the office hour and try to keep up good progress. Etherpad link: https://etherpad.openstack.org/p/qa-stein-priority Let's work closely as a team and have another successful cycle.
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128225.html [2] https://review.openstack.org/#/c/595277 [3] https://docs.google.com/presentation/d/1Im-iYVzroKwXKP23p12Q5vsUGdk2V26SPpLWF3I5dbA/edit?usp=sharing [4] http://logs.openstack.org/33/601433/2/check/tempest-full/b4ea6bf/testr_results.html.gz [5] https://bugs.launchpad.net/tempest/+bug/1790864 [6] https://blueprints.launchpad.net/tempest/+spec/tempest-cli-unit-test-coverage -gmann From paul.bourke at oracle.com Thu Sep 27 08:50:25 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Thu, 27 Sep 2018 09:50:25 +0100 Subject: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core In-Reply-To: References: Message-ID: <898bc11e-0638-3970-06f2-1208f11e240d@oracle.com> +1 On 25/09/18 16:47, Eduardo Gonzalez wrote: > Hi, > > I would like to propose Chason Chan to the kolla-ansible core team. > > Chason is been working on addition of Vitrage roles, rework VpnaaS > service, maintaining > documentation as well as fixing many bugs. > > Voting will be open for 14 days (until 9th of Oct). > > Kolla-ansible cores, please leave a vote. > Consider this mail my +1 vote > > Regards, > Eduardo > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thierry at openstack.org Thu Sep 27 09:17:13 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 27 Sep 2018 11:17:13 +0200 Subject: [openstack-dev] [storyboard] Prioritization? 
In-Reply-To: References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <3138793d-f86e-2ea2-0b0d-959bcd6b88af@openstack.org> Message-ID: Ben Nemec wrote: > On 9/25/18 3:29 AM, Thierry Carrez wrote: >> Absence of priorities was an initial design choice[1] based on the >> fact that in an open collaboration every group, team, organization has >> their own views on what the priority of a story is, so worklist and >> tags are better ways to capture that. Also they don't really work >> unless you triage everything. And then nobody really looks at them to >> prioritize their work, so they are high cost for little benefit. > > So was the storyboard implementation based on the rant section then? > Because I don't know that I agree with/understand some of the assertions > there. > > First, don't we _need_ to triage everything? At least on some minimal > level? Not looking at new bugs at all seems like the way you end up with > a security bug open for two years *ahem*. Not that I would know anything > about that (it's been fixed now, FTR). StoryBoard's initial design is definitely tainted by an environment that has changed since. Back in 2014, most teams did not triage every bug, and were basically using Launchpad as a task tracker (you created the bugs that you worked on) rather than a bug tracker (where you triage incoming requests and prioritize them). Storyboard is therefore designed primarily as a task tracker (a way to organize work within teams), so it's not great as an issue tracker (a way for users to report issues). The tension between the two concepts was explored in [1], with the key argument that trying to do both at the same time is bound to create frustration one way or another. In PM literature you will even find suggestions that the only way to solve the diverging requirements is to use two completely different tools (with ways to convert a reported issue into a development story).
But that "solution" works a lot better in environments where the users and the developers are completely separated (proprietary software). [1] https://wiki.openstack.org/wiki/StoryBoard/Vision > [...] > Also, like it or not there is technical debt we're carrying over here. > All of our bug triage up to this point has been based on launchpad > priorities, and as I think I noted elsewhere it would be a big step > backward to completely throw that out. Whatever model for prioritization > and triage that we choose, I feel like there needs to be a reasonable > migration path for the thousands of existing triaged lp bugs in OpenStack. I agree, which is why I'm saying that the "right" answer might not be the "best" answer. -- Thierry Carrez (ttx) From thierry at openstack.org Thu Sep 27 09:30:28 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 27 Sep 2018 11:30:28 +0200 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> Message-ID: <3b40375e-8970-35fc-5941-b331d5ecaf63@openstack.org> First I think that is a great goal, but I want to pick up on Dean's comment: Dean Troyer wrote: > [...] > The OSC core team is very thin, yes, it seems as though companies > don't like to spend money on client-facing things...I'll be in the > hall following this thread should anyone want to talk... I think OSC (and client-facing tooling in general) is a great place for OpenStack users (deployers of OpenStack clouds) to contribute. It's a smaller territory, it's less time-consuming than the service side, they are the most obvious interested party, and a small, 20% time investment would have a dramatic impact. 
It's arguably difficult for OpenStack users to get involved in "OpenStack development": keeping track of what's happening in a large team is already likely to consume most of the time you can dedicate to it. But OSC is a specific, smaller area which would be a good match for the expertise and time availability of anybody running an OpenStack cloud that wants to contribute back and make OpenStack better. Shameless plug: I proposed a Forum session in Berlin to discuss "Getting OpenStack users involved in the project" -- and we'll discuss such areas that are a particularly good match for users to get involved. -- Thierry Carrez (ttx) From cdent+os at anticdent.org Thu Sep 27 09:47:48 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 27 Sep 2018 10:47:48 +0100 (BST) Subject: [openstack-dev] [placement] Tetsuro Nakamura now core Message-ID: Since there were no objections and a week has passed, I've made Tetsuro a member of placement-core. Thanks for your willingness and continued help. Use your powers wisely. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sbauza at redhat.com Thu Sep 27 10:23:43 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 27 Sep 2018 12:23:43 +0200 Subject: [openstack-dev] [nova] Stein PTG summary In-Reply-To: References: <03b8b23f-31b6-e5fa-675d-5a40fbab58b5@gmail.com> Message-ID: On Thu, Sep 27, 2018 at 2:46 AM Matt Riedemann wrote: > On 9/26/2018 5:30 PM, Sylvain Bauza wrote: > > So, during this day, we also discussed about NUMA affinity and we said > > that we could possibly use nested resource providers for NUMA cells in > > Stein, but given we don't have yet a specific Placement API query, NUMA > > affinity should still be using the NUMATopologyFilter. > > That said, when looking about how to use this filter for vGPUs, it looks > > to me that I'd need to provide a new version for the NUMACell object and > > modify the virt.hardware module. 
Are we also accepting this (given it's > > a temporary question), or should we need to wait for the Placement API > > support ? > > > > Folks, what are you thoughts ? > > I'm pretty sure we've said several times already that modeling NUMA in > Placement is not something for which we're holding up the extraction. > > It's not an extraction question. Just about knowing whether the Nova folks would accept us to modify some o.vo object and module just for a temporary time until Placement API has some new query parameter. Whether Placement is extracted or not isn't really the problem, it's more about the time it will take for this query parameter ("numbered request groups to be in the same subtree") to be implemented in the Placement API. The real problem we have with vGPUs is that if we don't have NUMA affinity, the performance would be around 10% less for vGPUs (if the pGPU isn't on the same NUMA cell than the pCPU). Not sure large operators would accept that :( -Sylvain -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wjstk16 at gmail.com Thu Sep 27 11:35:52 2018 From: wjstk16 at gmail.com (ddaasd) Date: Thu, 27 Sep 2018 20:35:52 +0900 Subject: [openstack-dev] [vitrage] I would like to know how to link vitrage and prometheus. Message-ID: I would like to know how to link vitrage and prometheus. Is there a way to receive alert information from vitrage that occurred in prometheus and Alert manager like zabbix-vitrage? I wonder ,if i can, receive prometheus's alert and place it on the entity graph in vitrage. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Thu Sep 27 12:37:21 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 27 Sep 2018 07:37:21 -0500 Subject: [openstack-dev] [nova] Stein PTG summary In-Reply-To: References: <03b8b23f-31b6-e5fa-675d-5a40fbab58b5@gmail.com> Message-ID: <543de612-b89e-54dc-8c2c-b6c0cf46b3c0@gmail.com> On 9/27/2018 5:23 AM, Sylvain Bauza wrote: > > > On Thu, Sep 27, 2018 at 2:46 AM Matt Riedemann > wrote: > > On 9/26/2018 5:30 PM, Sylvain Bauza wrote: > > So, during this day, we also discussed about NUMA affinity and we > said > > that we could possibly use nested resource providers for NUMA > cells in > > Stein, but given we don't have yet a specific Placement API > query, NUMA > > affinity should still be using the NUMATopologyFilter. > > That said, when looking about how to use this filter for vGPUs, > it looks > > to me that I'd need to provide a new version for the NUMACell > object and > > modify the virt.hardware module. Are we also accepting this > (given it's > > a temporary question), or should we need to wait for the > Placement API > > support ? > > > > Folks, what are you thoughts ? > > I'm pretty sure we've said several times already that modeling NUMA in > Placement is not something for which we're holding up the extraction. > > > It's not an extraction question.
Just about knowing whether the Nova > folks would accept us to modify some o.vo object and module just for a > temporary time until Placement API has some new query parameter. > Whether Placement is extracted or not isn't really the problem, it's > more about the time it will take for this query parameter ("numbered > request groups to be in the same subtree") to be implemented in the > Placement API. > The real problem we have with vGPUs is that if we don't have NUMA > affinity, the performance would be around 10% less for vGPUs (if the > pGPU isn't on the same NUMA cell than the pCPU). Not sure large > operators would accept that :( > > -Sylvain I don't know how close we are to having whatever we need for modeling NUMA in the placement API, but I'll go out on a limb and assume we're not close. Given that, if we have to do something within nova for NUMA affinity for vGPUs for the NUMATopologyFilter, then I'd be OK with that since it's short term like you said (although our "short term" workarounds tend to last for many releases). Anyone that cares about NUMA today already has to enable the scheduler filter anyway. -- Thanks, Matt From ifatafekn at gmail.com Thu Sep 27 12:59:29 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Thu, 27 Sep 2018 15:59:29 +0300 Subject: [openstack-dev] [vitrage] I would like to know how to link vitrage and prometheus. In-Reply-To: References: Message-ID: Hi, You can use the Prometheus datasource in Vitrage starting from Rocky release. In order to use it, follow these steps: 1. Add 'prometheus' to 'types' configuration under /etc/vitrage/vitrage.conf 2. In alertmanager.yml add a receiver as follows:

- name: **
  webhook_configs:
  - url: **  # example: 'http://127.0.0.1:8999/v1/event'
    send_resolved: true
    http_config:
      basic_auth:
        username: **
        password: **

Br, Ifat On Thu, Sep 27, 2018 at 2:38 PM ddaasd wrote: > I would like to know how to link vitrage and prometheus.
> Is there a way to receive alert information from vitrage that occurred in > prometheus and Alert manager like zabbix-vitrage? > I wonder ,if i can, receive prometheus's alert and place it on the entity > graph in vitrage. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Thu Sep 27 13:16:38 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 27 Sep 2018 08:16:38 -0500 Subject: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs In-Reply-To: <1538035903.1137963.1522329592.7CA61F29@webmail.messagingengine.com> References: <1537790427.1265517.1518561608.5261E953@webmail.messagingengine.com> <48f4ddf1-3d93-340e-3ad2-11bc4ef004ef@redhat.com> <1537868010.2456380.1519770432.11C4E0C5@webmail.messagingengine.com> <16618f03cef.125129a738977.7142710598346158978@ghanshyammann.com> <1538035903.1137963.1522329592.7CA61F29@webmail.messagingengine.com> Message-ID: Using the domain name + group name pairing also allows for things like: JSON:{"group_name": "C", "domain_name": "X"} JSON:{"group_name": "C", "domain_name": "Y"} To showcase how we solve the ambiguity in group names by namespacing them with domains. 
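[Editor's note] Lance's namespacing scheme above can be sketched in a few lines of Python. This is illustrative only: the `JSON:` format-specifier convention is the one proposed in this thread, not an existing keystone API.

```python
import json

PREFIX = "JSON:"

def encode_attribute(group_name, domain_name):
    # Prepend a format specifier so the relying party knows how to
    # interpret the string value.
    return PREFIX + json.dumps(
        {"group_name": group_name, "domain_name": domain_name})

def decode_attribute(value):
    # Strip the specifier and parse; plain strings pass through untouched.
    if value.startswith(PREFIX):
        return json.loads(value[len(PREFIX):])
    return value

a = encode_attribute("C", "X")
b = encode_attribute("C", "Y")
assert a != b  # same group name, disambiguated by domain
assert decode_attribute(a) == {"group_name": "C", "domain_name": "X"}
```

Because each attribute value carries its own domain, two groups named "C" stay distinguishable, which a bare semicolon-separated list of names cannot do.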
On Thu, Sep 27, 2018 at 3:11 AM Colleen Murphy wrote: > > > On Thu, Sep 27, 2018, at 5:09 AM, vishakha agarwal wrote: > > > From : Colleen Murphy > > > To : > > > Date : Tue, 25 Sep 2018 18:33:30 +0900 > > > Subject : Re: [openstack-dev] [keystone] Domain-namespaced user > attributes in SAML assertions from Keystone IdPs > > > ============ Forwarded message ============ > > > > On Mon, Sep 24, 2018, at 8:40 PM, John Dennis wrote: > > > > > On 9/24/18 8:00 AM, Colleen Murphy wrote: > > > > > > This is in regard to https://launchpad.net/bugs/1641625 and > the proposed patch https://review.openstack.org/588211 for it. Thanks > Vishakha for getting the ball rolling. > > > > > > > > > > > > tl;dr: Keystone as an IdP should support sending > non-strings/lists-of-strings as user attribute values, specifically lists > of keystone groups, here's how that might happen. > > > > > > > > > > > > Problem statement: > > > > > > > > > > > > When keystone is set up as a service provider with an external > non-keystone identity provider, it is common to configure the mapping rules > to accept a list of group names from the IdP and map them to some property > of a local keystone user, usually also a keystone group name. When keystone > acts as the IdP, it's not currently possible to send a group name as a user > property in the assertion. There are a few problems: > > > > > > > > > > > > 1. We haven't added any openstack_groups key in the > creation of the SAML assertion ( > http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164 > ). > > > > > > 2. If we did, this would not be enough. Unlike other IdPs, > in keystone there can be multiple groups with the same name, namespaced by > domain. So it's not enough for the SAML AttributeStatement to contain a > semi-colon-separated list of group names, since a user could theoretically > be a member of two or more groups with the same name. 
> > > > > > * Why can't we just send group IDs, which are unique? > Because two different keystones are not going to have independent groups > with the same UUID, so we cannot possibly map an ID of a group from > keystone A to the ID of a different group in keystone B. We could map the > ID of the group in in A to the name of a group in B but then operators need > to create groups with UUIDs as names which is a little awkward for both the > operator and the user who now is a member of groups with nondescriptive > names. > > > > > > 3. If we then were able to encode a complex type like a > group dict in a SAML assertion, we'd have to deal with it on the service > provider side by being able to parse such an environment variable from the > Apache headers. > > > > > > 4. The current mapping rules engine uses basic python > string formatting to translate remote key-value pairs to local rules. We > would need to change the mapping API to work with values more complex than > strings and lists of strings. > > > > > > > > > > > > Possible solution: > > > > > > > > > > > > Vishakha's patch (https://review.openstack.org/588211) starts > to solve (1) but it doesn't go far enough to solve (2-4). What we talked > about at the PTG was: > > > > > > > > > > > > 2. Encode the group+domain as a string, for example by > using the dict string repr or a string representation of some custom XML > and maybe base64 encoding it. > > > > > > * It's not totally clear whether the AttributeValue > class of the pysaml2 library supports any data types outside of the > xmlns:xs namespace or whether nested XML is an option, so encoding the > whole thing as an xs:string seems like the simplest solution. > > > > > > 3. The SP will have to be aware that openstack_groups is a > special key that needs the encoding reversed. 
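[Editor's note] Option (2) above, encoding the group+domain dict as a plain string that fits in an xs:string attribute, with the SP reversing the encoding in step (3), could look roughly like the following. The key layout is an assumption for illustration, not keystone's actual format.

```python
import base64
import json

def encode_group(group):
    # IdP side: serialize the complex value, then base64-encode it so it
    # travels as an ordinary xs:string attribute value.
    return base64.b64encode(json.dumps(group).encode("utf-8")).decode("ascii")

def decode_group(value):
    # SP side: reverse the encoding before the mapping engine sees it.
    return json.loads(base64.b64decode(value).decode("utf-8"))

group = {"name": "A", "domain": {"name": "domainB"}}
token = encode_group(group)
assert decode_group(token) == group  # round-trips losslessly
```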
> > > > > > * I wrote down "MultiDict" in my notes but I don't > recall exactly what format the environment variable would take that would > make a MultiDict make sense here, in any case I think encoding the whole > thing as a string eliminates the need for this. > > > > > > 4. We didn't talk about the mapping API, but here's what I > think. If we were just talking about group names, the mapping API today > would work like this (slight oversimplification for brevity): > > > > > > > > > > > > Given a list of openstack_groups like ["A", "B", "C"], it would > work like this: > > > > > > > > > > > > [ > > > > > > { > > > > > > "local": > > > > > > [ > > > > > > { > > > > > > "group": > > > > > > { > > > > > > "name": "{0}", > > > > > > "domain": > > > > > > { > > > > > > "name": "federated_domain" > > > > > > } > > > > > > } > > > > > > } > > > > > > ], "remote": > > > > > > [ > > > > > > { > > > > > > "type": "openstack_groups" > > > > > > } > > > > > > ] > > > > > > } > > > > > > ] > > > > > > (paste in case the spacing makes this unreadable: > http://paste.openstack.org/show/730623/ ) > > > > > > > > > > > > But now, we no longer have a list of strings but something more > like [{"name": "A", "domain_name": "Default"} {"name": "B", "domain_name": > "Default", "name": "A", "domain_name": "domainB"}]. Since {0} isn't a > string, this example doesn't really work. Instead, let's assume that in > step (3) we converted the decoded AttributeValue text to an object. 
Then > the mapping could look more like this: > > > > > > > > > > > > [ > > > > > > { > > > > > > "local": > > > > > > [ > > > > > > { > > > > > > "group": > > > > > > { > > > > > > "name": "{0.name}", > > > > > > "domain": > > > > > > { > > > > > > "name": "{0.domain_name}" > > > > > > } > > > > > > } > > > > > > } > > > > > > ], "remote": > > > > > > [ > > > > > > { > > > > > > "type": "openstack_groups" > > > > > > } > > > > > > ] > > > > > > } > > > > > > ] > > > > > > (paste: http://paste.openstack.org/show/730622/ ) > > > > > > > > > > > > Alternatively, we could forget about the namespacing problem > and simply say we only pass group names in the assertion, and if you have > ambiguous group names you're on your own. We could also try to support > both, e.g. have an openstack_groups mean a list of group names for simpler > use cases, and openstack_groups_unique mean the list of encoded > group+domain strings for advanced use cases. > > > > > > > > > > > > Finally, whatever we decide for groups we should also apply to > openstack_roles which currently only supports global roles and not > domain-specific roles. > > > > > > > > > > > > (It's also worth noting, for clarity, that the samlize function > does handle namespaced projects, but this is because it's retrieving the > project from the token and therefore there is only ever one project and one > project domain so there is no ambiguity.) > > > > > > > > > > > > > > > > A few thoughts to help focus the discussion: > > > > > > > > > > * Namespacing is critical, no design should be permitted which > allows > > > > > for ambiguous names. Ambiguous names are a security issue and can > be > > > > > used by an attacker. The SAML designers recognized the importance > to > > > > > disambiguate names. In SAML names are conveyed inside a > NameIdentifier > > > > > element which (optionally) includes "name qualifier" attributes > which in > > > > > SAML lingo is a namespace name. 
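[Editor's note] The `{0.name}`/`{0.domain_name}` rules in the mapping example above lean on a standard Python behavior: `str.format` can dereference attributes of a positional argument. A tiny stand-alone illustration, using `SimpleNamespace` as a stand-in for the decoded AttributeValue object rather than the real mapping engine:

```python
from types import SimpleNamespace

# Stand-in for the object decoded from the AttributeValue in step (3).
group = SimpleNamespace(name="A", domain_name="domainB")

# str.format dereferences attributes of its positional arguments,
# which is all the proposed "{0.name}" mapping rules need.
local_group = {
    "name": "{0.name}".format(group),
    "domain": {"name": "{0.domain_name}".format(group)},
}
assert local_group == {"name": "A", "domain": {"name": "domainB"}}
```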
> > > > > * SAML does not define the format of an attribute value. You can > use > > > > > anything you want as long as it can be expressed in valid XML as > long as > > > > > the cooperating parties know how to interpret the XML content. But > > > > > herein lies the problem. Very few SAML implementations know how to > > > > > consume an attribute value other than a string. In the real world, > > > > > despite what the SAML spec says is permitted, the constraint is that > attribute > > > > > values are strings. > > > > > > > > > > * I haven't looked at the pysaml implementation but I'd be > surprised if > > > > > it treated attribute values as anything other than a string. In > theory > > > > > it could take any Python object (or JSON) and serialize it into > XML but > > > > > you would still be stuck with the receiver being unable to parse > the > > > > > attribute value (see above point). > > > > > > > > > > * You can encode complex data in an attribute value while only > using a > > > > > simple string. The only requirement is the relying party knowing > how to > > > > > interpret the string value. Note, this is distinctly different > than > > > > > using non-string attribute values because of who is responsible > for > > > > > parsing the value. If you use a non-string attribute value the > SAML > > > > > library needs to know how to parse it, and none or very few will know > how to > > > > > process that element. But if it's a string value the SAML library > will > > > > > happily pass that string back up to the application who can then > > > > > interpret it. The easiest way to embed complex data in a string > is with > > > > > JSON, we do it all the time, all over the place in OpenStack. > [1][2] > > > > > > > > > > So my suggestion would be to give the attribute a meaningful name. > > > > > Define a JSON schema for the data and then let the upper layers > decode > > > > > the JSON and operate on it. This is no different than any other > SAML > > > > > attribute passed as a string, the receiver MUST know how to > interpret the > > > > > string value. > > > > > > > > > > [1] We already pass complex data in a SAML attribute string > value. We > > > > > permit a comma separated list of group names to appear in the > 'groups' > > > > > mapping rule (although I don't think this feature is documented > in our > > > > > mapping rules documentation). The receiver (our mapping engine) > has > > > > > hard-coded logic to look for a list of names. > > > > > > > > > > [2] We might want to prepend a format specifier to a string > containing > > > > > complex data, e.g. "JSON:{json object}". Our parser could then > look for > > > > > a leading format tag and if it finds one strip it off and pass > the rest > > > > > of the string into the proper parser. > > > > > > > > > > -- > > > > > John Dennis > > > > > > > > > > > > > Thanks for this response, John. I think serializing to JSON and > prepending a format specifier makes sense. > > > > > > > > Colleen > > > > Thanks for the response Colleen, John. After reading the above mails, > > what I understood is to pass a list of group names specific to a > > keystone in to a JSON schema prepending a tag. e.g - groups_names = > > "JSON:{groups_names:[a,b,c]}". In the SAML assertion Attribute_name can be > > openstack_groups and Attribute_value will be > > "JSON:{groups_names:[a,b,c]}". > > Not quite. The important part is that the group name needs to be tied to a > domain name. The attribute statement generator can already produce a list > of attributes of one type by just adding new attribute values to the > attribute, see for example openstack_roles[1]. The attribute value then > needs to be a single group + domain JSON blob. 
> > As an example, if a user is a member of group A in domain X and also a > member of group B in domain Y, the attribute could look like this: > > > JSON:{"group_name": "A", > "domain_name": "X"} > JSON:{"group_name": "B", > "domain_name": "Y"} > > > [1] > http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n177 > > Colleen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Thu Sep 27 13:28:42 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 27 Sep 2018 08:28:42 -0500 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: On Wed, Sep 26, 2018 at 1:56 PM Tim Bell wrote: > > Doug, > > Thanks for raising this. I'd like to highlight the goal "Finish moving > legacy python-*client CLIs to python-openstackclient" from the etherpad and > propose this for a T/U series goal. > > To give it some context and the motivation: > > At CERN, we have more than 3000 users of the OpenStack cloud. We write an > extensive end user facing documentation which explains how to use the > OpenStack along with CERN specific features (such as workflows for > requesting projects/quotas/etc.). > > One regular problem we come across is that the end user experience is > inconsistent. In some cases, we find projects which are not covered by the > unified OpenStack client (e.g. Manila). In other cases, there are subsets > of the function which require the native project client. 
> > I would strongly support a goal which targets > > - All new projects should have the end user facing functionality fully > exposed via the unified client > - Existing projects should aim to close the gap within 'N' cycles (N to be > defined) > - Many administrator actions would also benefit from integration (reader > roles are end users too so list and show need to be covered too) > - Users should be able to use a single openrc for all interactions with > the cloud (e.g. not switch between password for some CLIs and Kerberos for > OSC) > > Sorry to back up the conversation a bit, but does reader role require work in the clients? Last release we incorporated three roles by default during keystone's installation process [0]. Is the definition in the specification what you mean by reader role, or am I on a different page? [0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html#default-roles > The end user perception of a solution will be greatly enhanced by a single > command line tool with consistent syntax and authentication framework. > > It may be a multi-release goal but it would really benefit the cloud > consumers and I feel that goals should include this audience also. > > Tim > > -----Original Message----- > From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Date: Wednesday, 26 September 2018 at 18:00 > To: openstack-dev , > openstack-operators , > openstack-sigs > Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for > T series > > It's time to start thinking about community-wide goals for the T > series. > > We use community-wide goals to achieve visible common changes, push for > basic levels of consistency and user experience, and efficiently > improve > certain areas where technical debt payments have become too high - > across all OpenStack projects. 
Community input is important to ensure > that the TC makes good decisions about the goals. We need to consider > the timing, cycle length, priority, and feasibility of the suggested > goals. > > If you are interested in proposing a goal, please make sure that before > the summit it is described in the tracking etherpad [1] and that you > have started a mailing list thread on the openstack-dev list about the > proposal so that everyone in the forum session [2] has an opportunity > to > consider the details. The forum session is only one step in the > selection process. See [3] for more details. > > Doug > > [1] https://etherpad.openstack.org/p/community-goals > [2] > https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 > [3] https://governance.openstack.org/tc/goals/index.html > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wjstk16 at gmail.com Thu Sep 27 13:45:09 2018 From: wjstk16 at gmail.com (ddaasd) Date: Thu, 27 Sep 2018 22:45:09 +0900 Subject: [openstack-dev] [vitrage] I would like to know how to link vitrage and prometheus. In-Reply-To: References: Message-ID: Hello Ifat, Thank you for your help! I really appreciate it. Thanks again! Best Regards, Won 2018년 9월 27일 (목) 오후 10:00, Ifat Afek 님이 작성: > Hi, > > You can use the Prometheus datasource in Vitrage starting from Rocky > release. 
> > In order to use it, follow these steps: > > 1. Add 'prometheus' to 'types' configuration under > /etc/vitrage/vitrage.conf > 2. In alertmanager.yml add a receiver as follows: > > - name: ** > > webhook_configs: > > - url: ** # example: 'http://127.0.0.1:8999/v1/event > ' > > send_resolved: true > > http_config: > > basic_auth: > > username: ** > > password: ** > > > > Br, > Ifat > > > On Thu, Sep 27, 2018 at 2:38 PM ddaasd wrote: > >> I would like to know how to link vitrage and prometheus. >> Is there a way to receive alert information from vitrage that occurred in >> prometheus and Alert manager like zabbix-vitrage? >> I wonder ,if i can, receive prometheus's alert and place it on the entity >> graph in vitrage. >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Sep 27 13:49:31 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 27 Sep 2018 08:49:31 -0500 Subject: [openstack-dev] [storyboard] Prioritization? 
In-Reply-To: References: <8cc009a1-eae7-ee8e-f920-60eaf5c803a6@nemebean.com> <3138793d-f86e-2ea2-0b0d-959bcd6b88af@openstack.org> Message-ID: <8ff9b7ed-4a24-38e2-bf51-b42b6b9244e2@nemebean.com> On 9/27/18 4:17 AM, Thierry Carrez wrote: > Ben Nemec wrote: >> On 9/25/18 3:29 AM, Thierry Carrez wrote: >>> Absence of priorities was an initial design choice[1] based on the >>> fact that in an open collaboration every group, team, organization >>> has their own views on what the priority of a story is, so worklist >>> and tags are better ways to capture that. Also they don't really work >>> unless you triage everything. And then nobody really looks at them to >>> prioritize their work, so they are high cost for little benefit. >> >> So was the storyboard implementation based on the rant section then? >> Because I don't know that I agree with/understand some of the >> assertions there. >> >> First, don't we _need_ to triage everything? At least on some minimal >> level? Not looking at new bugs at all seems like the way you end up >> with a security bug open for two years *ahem*. Not that I would know >> anything about that (it's been fixed now, FTR). > > StoryBoard's initial design is definitely tainted by an environment that > has changed since. Back in 2014, most teams did not triage every bug, > and were basically using Launchpad as a task tracker (you created the > bugs that you worked on) rather than a bug tracker (you triage incoming > requests and prioritize them). I'm not sure that has actually changed much. Stemming from this thread I had an offline discussion around bug management and the gist was that we don't actually spend much time looking at the bug list for something to work on. I tend to pick up a bug when I hit it in my own environments or if I'm doing bug triage and it's something I think I can fix quickly. I would like to think that others are more proactive, but I suspect that's wishful thinking. 
I had vague thoughts that I might actually start tackling bugs that way this cycle since I spent a lot of last cycle getting Oslo bugs triaged so I might be able to repurpose that time, but until it actually happens it's just hopes and dreams. :-) Unfortunately, even if bug triage is a "write once, read never" process I think we still need to do it to avoid the case I mentioned above where something important falls through the cracks. :-/ > > Storyboard is therefore designed primarily a task tracker (a way to > organize work within teams), so it's not great as an issue tracker (a > way for users to report issues). The tension between the two concepts > was explored in [1], with the key argument that trying to do both at the > same time is bound to create frustration one way or another. In PM > literature you will even find suggestions that the only way to solve the > diverging requirements is to use two completely different tools (with > ways to convert a reported issue into a development story). But that > "solution" works a lot better in environments where the users and the > developers are completely separated (proprietary software). > > [1] https://wiki.openstack.org/wiki/StoryBoard/Vision > >> [...] >> Also, like it or not there is technical debt we're carrying over here. >> All of our bug triage up to this point has been based on launchpad >> priorities, and as I think I noted elsewhere it would be a big step >> backward to completely throw that out. Whatever model for >> prioritization and triage that we choose, I feel like there needs to >> be a reasonable migration path for the thousands of existing triaged >> lp bugs in OpenStack. > > I agree, which is why I'm saying that the "right" answer might not be > the "best" answer. > Yeah, I was mostly just +1'ing your point here. 
:-) From doug at doughellmann.com Thu Sep 27 14:06:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 27 Sep 2018 10:06:06 -0400 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> Message-ID: Dean Troyer writes: > On Wed, Sep 26, 2018 at 3:44 PM, Matt Riedemann wrote: >> I started documenting the compute API gaps in OSC last release [1]. It's a >> big gap and needs a lot of work, even for existing CLIs (the cold/live >> migration CLIs in OSC are a mess, and you can't even boot from volume where >> nova creates the volume for you). That's also why I put something into the >> etherpad about the OSC core team even being able to handle an onslaught of >> changes for a goal like this. > > The OSC core team is very thin, yes, it seems as though companies > don't like to spend money on client-facing things...I'll be in the > hall following this thread should anyone want to talk... > > The migration commands are a mess, mostly because I got them wrong to > start with and we have only tried to patch it up, this is one area I > think we need to wipe clean and fix properly. Yay! Major version > release! I definitely think having details about the gaps would be a prerequisite for approving a goal, but I wonder if that's something 1 person could even do alone. Is this an area where a small team is needed? >> I thought the same, and we talked about this at the Austin summit, but OSC >> is inconsistent about this (you can live migrate a server but you can't >> evacuate it - there is no CLI for evacuation). It also came up at the Stein >> PTG with Dean in the nova room giving us some direction. 
[2] I believe the >> summary of that discussion was: > >> a) to deal with the core team sprawl, we could move the compute stuff out of >> python-openstackclient and into an osc-compute plugin (like the >> osc-placement plugin for the placement service); then we could create a new >> core team which would have python-openstackclient-core as a superset > > This is not my first choice but is not terrible either... We built cliff to be based on plugins to support this sort of work distribution, right? >> b) Dean suggested that we close the compute API gaps in the SDK first, but >> that could take a long time as well...but it sounded like we could use the >> SDK for things that existed in the SDK and use novaclient for things that >> didn't yet exist in the SDK > > Yup, this can be done in parallel. The unit of decision for use sdk > vs use XXXclient lib is per-API call. If the client lib can use an > SDK adapter/session it becomes even better. I think the priority for > what to address first should be guided by complete gaps in coverage > and the need for microversion-driven changes. > >> This might be a candidate for one of these multi-release goals that the TC >> started talking about at the Stein PTG. I could see something like this >> being a goal for Stein: >> >> "Each project owns its own osc- plugin for OSC CLIs" >> >> That deals with the core team and sprawl issue, especially with stevemar >> being gone and dtroyer being distracted by shiny x-men bird related things. >> That also seems relatively manageable for all projects to do in a single >> release. Having a single-release goal of "close all gaps across all service >> types" is going to be extremely tough for any older projects that had CLIs >> before OSC was created (nova/cinder/glance/keystone). For newer projects, >> like placement, it's not a problem because they never created any other CLI >> outside of OSC. Yeah, I agree this work is going to need to be split up. 
I'm still not sold on the idea of multi-cycle goals, personally. > I think the major difficulty here is simply how to migrate users from > today state to future state in a reasonable manner. If we could teach > OSC how to handle the same command being defined in multiple plugins > properly (hello entrypoints!) it could be much simpler as we could > start creating the new plugins and switch as the new command > implementations become available rather than having a hard cutover. > > Or maybe the definition of OSC v4 is as above and we just work at it > until complete and cut over at the end. Note that the current APIs > that are in-repo (Compute, Identity, Image, Network, Object, Volume) > are all implemented using the plugin structure, OSC v4 could start as > the breaking out of those without command changes (except new > migration commands!) and then the plugins all re-write and update at > their own tempo. Dang, did I just deconstruct my project? It sure sounds like it. Congratulations! I like the idea of moving the existing code into libraries, having python-openstackclient depend on them, and then asking project teams for more help with them. > One thing I don't like about that is we just replace N client libs > with N (or more) plugins now and the number of things a user must > install doesn't go down. I would like to hear from anyone who deals > with installing OSC if that is still a big deal or should I let go of > that worry? Don't package managers just deal with this? I can pip/yum/apt install something and get all of its dependencies, right? 
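For anyone unfamiliar with the mechanism being discussed: cliff-style plugin commands are published through setuptools entry points, and the "same command defined in multiple plugins" clash Dean mentions can be sketched with the standard library alone. This is an illustrative sketch, not OSC code; the group name passed in below is made up (real OSC plugins register under groups such as openstack.compute.v2, but nothing here depends on that).

```python
# Sketch: discover cliff-style subcommands from setuptools entry points
# and flag names that more than one plugin claims. Illustrative only.
from collections import defaultdict
from importlib.metadata import entry_points


def commands_by_name(groups):
    """Map command name -> list of entry points that provide it."""
    found = defaultdict(list)
    for group in groups:
        try:
            eps = entry_points(group=group)      # Python 3.10+
        except TypeError:
            eps = entry_points().get(group, [])  # Python 3.8/3.9 fallback
        for ep in eps:
            found[ep.name].append(ep)
    return found


def duplicate_commands(groups):
    """Names defined by more than one plugin -- the clash OSC would need a policy for."""
    return {name: eps for name, eps in commands_by_name(groups).items()
            if len(eps) > 1}


# A nonexistent group simply yields no commands, and therefore no clashes.
print(duplicate_commands(["example.nonexistent.group"]))  # {}
```

With something like this, an application could resolve clashes deterministically (e.g. prefer the in-tree implementation until a plugin opts in to override it) instead of requiring a hard cutover.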
Doug From doug at doughellmann.com Thu Sep 27 14:10:02 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 27 Sep 2018 10:10:02 -0400 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <1a886e1e-07d7-5326-6a9f-3203367a20a8@inaugust.com> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> <1a886e1e-07d7-5326-6a9f-3203367a20a8@inaugust.com> Message-ID: Monty Taylor writes: > Main difference is making sure these new deconstructed plugin teams > understand the client support lifecycle - which is that we don't drop > support for old versions of services in OSC (or SDK). It's a shift from > the support lifecycle and POV of python-*client, but it's important and > we just need to all be on the same page. That sounds like a reason to keep the governance of the libraries under the client tool project. Doug From d.krol at samsung.com Thu Sep 27 14:48:28 2018 From: d.krol at samsung.com (Dariusz Krol) Date: Thu, 27 Sep 2018 16:48:28 +0200 Subject: [openstack-dev] [python3-first] support in stable branches References: Message-ID: <20180927144829eucas1p26786a6e62c869b8138066f8857dae267~YSSg9Kjph2467924679eucas1p2l@eucas1p2.samsung.com> Hello Champions :) I work on the Trove project and we are wondering if python3 should be supported in previous releases as well? Actually this question was asked by Alan Pevec from the stable branch maintainers list. I saw you added releases up to ocata to support python3 and there are already changes on gerrit waiting to be merged but after reading [1] I have my doubts about this. Could you elaborate why it is necessary to support previous releases ? Best, Dariusz Krol [1] https://docs.openstack.org/project-team-guide/stable-branches.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 13168 bytes Desc: not available URL: From doug at doughellmann.com Thu Sep 27 15:05:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 27 Sep 2018 11:05:34 -0400 Subject: [openstack-dev] [goals][python3] switching python package jobs Message-ID: I think we are ready to go ahead and switch all of the python packaging jobs to the new set defined in the publish-to-pypi-python3 template [1]. We still have some cleanup patches for projects that have not completed their zuul migration, but there are only a few and rebasing those will be easy enough. The template adds a new check job that runs when any files related to packaging are changed (readme, setup, etc.). Otherwise it switches from the python2-based PyPI job to use python3. I have the patch to switch all official projects ready in [2]. Doug [1] http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/project-templates.yaml#n218 [2] https://review.openstack.org/#/c/598323/ From dtroyer at gmail.com Thu Sep 27 15:11:28 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 27 Sep 2018 10:11:28 -0500 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> Message-ID: On Thu, Sep 27, 2018 at 9:06 AM, Doug Hellmann wrote: > I definitely think having details about the gaps would be a prerequisite > for approving a goal, but I wonder if that's something 1 person could > even do alone. Is this an area where a small team is needed? Maybe, but it does break down along project/API lines for the most part, only crossing in places like Matt mentioned where compute+volume interact in server create, etc. For the purposes of a goal, I think we need to be thinking more about structural things than specific command changes. 
Things like Monty mentioned elsewhere in the thread about getting all of the existing client libs to correctly use an SDK adapter; then behaviours converge and the details of command changes become project-specific. > We built cliff to be based on plugins to support this sort of work > distribution, right? We did, my concerns about splitting the OSC in-repo plugins out are frankly more around losing control of things like command structure and consistency, not about the code. Looking at the loss of consistency in plugins shows that is a hard thing to maintain across a distributed set of groups. >> One thing I don't like about that is we just replace N client libs >> with N (or more) plugins now and the number of things a user must >> install doesn't go down. I would like to hear from anyone who deals >> with installing OSC if that is still a big deal or should I let go of >> that worry? > > Don't package managers just deal with this? I can pip/yum/apt install > something and get all of its dependencies, right? For those using that, yes. The set of folks interacting with OpenStack from a Windows desktop is not as large but their experience is sometimes a painful one...although wheels were just becoming a thing when I last tried to bundle OSC into a py2exe-style thing so the pains of things like OpenSSL may be fewer now. 
dt -- Dean Troyer dtroyer at gmail.com From dtroyer at gmail.com Thu Sep 27 15:13:37 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 27 Sep 2018 10:13:37 -0500 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> <1a886e1e-07d7-5326-6a9f-3203367a20a8@inaugust.com> Message-ID: On Thu, Sep 27, 2018 at 9:10 AM, Doug Hellmann wrote: > Monty Taylor writes: >> Main difference is making sure these new deconstructed plugin teams >> understand the client support lifecycle - which is that we don't drop >> support for old versions of services in OSC (or SDK). It's a shift from >> the support lifecycle and POV of python-*client, but it's important and >> we just need to all be on the same page. > > That sounds like a reason to keep the governance of the libraries under > the client tool project. Hmmm... I think that may address a big chunk of my reservations about being able to maintain consistency and user experience in a fully split-OSC world. 
dt -- Dean Troyer dtroyer at gmail.com From openstack at fried.cc Thu Sep 27 15:15:11 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 27 Sep 2018 10:15:11 -0500 Subject: [openstack-dev] [nova] Stein PTG summary In-Reply-To: <543de612-b89e-54dc-8c2c-b6c0cf46b3c0@gmail.com> References: <03b8b23f-31b6-e5fa-675d-5a40fbab58b5@gmail.com> <543de612-b89e-54dc-8c2c-b6c0cf46b3c0@gmail.com> Message-ID: <76fe7317-df29-13bc-8dc9-73e45d93a450@fried.cc> On 09/27/2018 07:37 AM, Matt Riedemann wrote: > On 9/27/2018 5:23 AM, Sylvain Bauza wrote: >> >> >> On Thu, Sep 27, 2018 at 2:46 AM Matt Riedemann > > wrote: >> >>     On 9/26/2018 5:30 PM, Sylvain Bauza wrote: >>      > So, during this day, we also discussed NUMA affinity and we >>     said >>      > that we could possibly use nested resource providers for NUMA >>     cells in >>      > Stein, but given we don't have yet a specific Placement API >>     query, NUMA >>      > affinity should still be using the NUMATopologyFilter. >>      > That said, when looking about how to use this filter for vGPUs, >>     it looks >>      > to me that I'd need to provide a new version for the NUMACell >>     object and >>      > modify the virt.hardware module. Are we also accepting this >>     (given it's >>      > a temporary question), or should we need to wait for the >>     Placement API >>      > support ? >>      > >>      > Folks, what are your thoughts ? >> >>     I'm pretty sure we've said several times already that modeling >> NUMA in >>     Placement is not something for which we're holding up the extraction. >> >> >> It's not an extraction question. Just about knowing whether the Nova >> folks would accept us modifying some o.vo object and module just for a >> temporary time until Placement API has some new query parameter. 
>> Whether Placement is extracted or not isn't really the problem, it's >> more about the time it will take for this query parameter ("numbered >> request groups to be in the same subtree") to be implemented in the >> Placement API. >> The real problem we have with vGPUs is that if we don't have NUMA >> affinity, the performance would be around 10% less for vGPUs (if the >> pGPU isn't on the same NUMA cell as the pCPU). Not sure large >> operators would accept that :( >> >> -Sylvain > > I don't know how close we are to having whatever we need for modeling > NUMA in the placement API, but I'll go out on a limb and assume we're > not close. True story. We've been talking about ways to do this since (at least) the Queens PTG, but haven't even landed on a decent design, let alone talked about getting it specced, prioritized, and implemented. Since full NRP support was going to be a prerequisite in any case, and our Stein plate is full, Train is the earliest we could reasonably expect to get the placement support going, let alone the nova side. So yeah... > Given that, if we have to do something within nova for NUMA > affinity for vGPUs for the NUMATopologyFilter, then I'd be OK with that > since it's short term like you said (although our "short term" > workarounds tend to last for many releases). Anyone that cares about > NUMA today already has to enable the scheduler filter anyway. 
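To make the constraint concrete for readers outside the nova scheduler code: the filter has to co-locate the instance's pCPUs with the pGPU's NUMA cell, otherwise the ~10% penalty mentioned above applies. A toy model of that placement decision (purely illustrative, not the NUMATopologyFilter or any Nova data structure):

```python
# Toy model of the NUMA-affinity constraint discussed above: only NUMA
# cells that can hold both the requested pCPUs and the pGPU qualify, so
# the guest never crosses cells to reach its GPU. Not Nova code.
def cells_with_affinity(host_cells, vcpus_needed, gpus_needed):
    """Return IDs of NUMA cells able to satisfy CPU and GPU demand together."""
    return [
        cell_id
        for cell_id, (free_cpus, free_gpus) in sorted(host_cells.items())
        if free_cpus >= vcpus_needed and free_gpus >= gpus_needed
    ]


# Cell 0 has free CPUs but no GPU; cell 1 has both, so only cell 1 qualifies.
host = {0: (6, 0), 1: (4, 1)}  # cell id -> (free pCPUs, free pGPUs)
print(cells_with_affinity(host, vcpus_needed=4, gpus_needed=1))  # [1]
```

A host where no single cell satisfies both demands would return an empty list, i.e. the host is rejected rather than accepting a cross-cell placement.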
> +1 to this ^ -efried From doug at doughellmann.com Thu Sep 27 15:36:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 27 Sep 2018 11:36:11 -0400 Subject: [openstack-dev] [python3-first] support in stable branches In-Reply-To: <20180927144829eucas1p26786a6e62c869b8138066f8857dae267~YSSg9Kjph2467924679eucas1p2l@eucas1p2.samsung.com> References: <20180927144829eucas1p26786a6e62c869b8138066f8857dae267~YSSg9Kjph2467924679eucas1p2l@eucas1p2.samsung.com> Message-ID: Dariusz Krol writes: > Hello Champions :) > > > I work on the Trove project and we are wondering if python3 should be > supported in previous releases as well? > > Actually this question was asked by Alan Pevec from the stable branch > maintainers list. > > I saw you added releases up to ocata to support python3 and there are > already changes on gerrit waiting to be merged but after reading [1] I > have my doubts about this. I'm not sure what you're referring to when you say "added releases up to ocata" here. Can you link to the patches that you have questions about? > Could you elaborate why it is necessary to support previous releases ? 
> > > Best, > > Dariusz Krol > > > [1] https://docs.openstack.org/project-team-guide/stable-branches.html > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Thu Sep 27 16:04:21 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 27 Sep 2018 11:04:21 -0500 Subject: [openstack-dev] [python3-first] support in stable branches In-Reply-To: References: <20180927144829eucas1p26786a6e62c869b8138066f8857dae267~YSSg9Kjph2467924679eucas1p2l@eucas1p2.samsung.com> Message-ID: On 9/27/18 10:36 AM, Doug Hellmann wrote: > Dariusz Krol writes: > >> Hello Champions :) >> >> >> I work on the Trove project and we are wondering if python3 should be >> supported in previous releases as well? >> >> Actually this question was asked by Alan Pevec from the stable branch >> maintainers list. >> >> I saw you added releases up to ocata to support python3 and there are >> already changes on gerrit waiting to be merged but after reading [1] I >> have my doubts about this. > > I'm not sure what you're referring to when you say "added releases up to > ocata" here. Can you link to the patches that you have questions about? Possibly the zuul migration patches for all the stable branches? If so, those don't change the status of python 3 support on the stable branches, they just split the zuul configuration to make it easier to add new python 3 jobs on master without affecting the stable branches. > >> Could you elaborate why it is necessary to support previous releases ? 
>> >> Best, >> >> Dariusz Krol >> >> >> [1] https://docs.openstack.org/project-team-guide/stable-branches.html From msm at redhat.com Thu Sep 27 16:48:19 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 27 Sep 2018 12:48:19 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, This week's meeting was mostly ceremonial, with the main topic of discussion being the office hours for the SIG. If you have not heard the news about the API-SIG, we are converting from a regular weekly meeting time to a set of scheduled office hours. This change was discussed over the course of meetings leading up to the PTG and was finalized last week. The reasoning behind this decision was summarized nicely by edleafe in the last newsletter: We, as a SIG, have recognized that we have moved into a new phase. With most of the API guidelines that we needed to write having been written, there is not "new stuff" to make demands on our time. In recognition of this, we are changing how we will work. How can you find the API-SIG? We have 2 office hours that we will hold in the #openstack-sdks channel on freenode: * Thursdays 0900-1000 UTC * Thursdays 1600-1700 UTC Additionally, there is usually someone from the API-SIG hanging out in the #openstack-sdks channel. Please feel free to ping dtantsur, edleafe, or elmiko as direct contacts. 
Although this marks the end of our weekly meetings, the API-SIG will continue to be active in the community and we would like to extend a hearty "huzzah!" to all the OpenStack contributors, operators, and users who have helped to create the guidelines and guidance that we all share. Huzzah! If you're interested in helping out, here are some things to get you started: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community. * None # Guidelines Currently Under Review [3] * Add an api-design doc with design advice https://review.openstack.org/592003 * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * Version and service discovery series Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-sig,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://storyboard.openstack.org/#!/project/1039 [6] https://git.openstack.org/cgit/openstack/api-sig Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From chris at openstack.org Thu Sep 27 16:54:30 2018 From: chris at openstack.org (Chris Hoge) Date: Thu, 27 Sep 2018 09:54:30 -0700 Subject: [openstack-dev] [k8s][tc] List of OpenStack and K8s Community Updates Message-ID: <7D4CAC9E-594B-45A6-9EEA-291E74CBFC8C@openstack.org> In the last year the SIG-K8s/SIG-OpenStack group has facilitated quite a bit of discussion between the OpenStack and Kubernetes communities. In doing this work we've delivered a number of presentations and held several working sessions. I've created an etherpad that contains links to these documents as a reference to the work and the progress we've made. I'll continue to keep the document updated, and if I've missed any links please feel free to add them. https://etherpad.openstack.org/p/k8s-openstack-updates -Chris From Tim.Bell at cern.ch Thu Sep 27 17:09:40 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 27 Sep 2018 17:09:40 +0000 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: Lance, The comment regarding ‘readers’ is more to explain that the distinction between ‘admin’ and ‘user’ commands is gradually reducing, where OSC has been prioritising ‘user’ commands. As an example, we give the CERN security team view-only access to many parts of the cloud. This allows them to perform their investigations independently. 
Thus, many commands which would be, by default, admin only are also available to roles such as the ‘readers’ (e.g. list, show, … of internals or projects which they are not in the members list) I don’t think there is any implications for Keystone (and the readers role is a nice improvement to replace the previous manual policy definitions) but more of a question of which subcommands we should aim to support in OSC. The *-manage commands such as nova-manage, I would consider, out of scope for OSC. Only admins would be migrating between versions or DB schemas. Tim From: Lance Bragstad Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, 27 September 2018 at 15:30 To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series On Wed, Sep 26, 2018 at 1:56 PM Tim Bell > wrote: Doug, Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. To give it some context and the motivation: At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. 
I would strongly support a goal which targets - All new projects should have the end user facing functionality fully exposed via the unified client - Existing projects should aim to close the gap within 'N' cycles (N to be defined) - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) Sorry to back up the conversation a bit, but does reader role require work in the clients? Last release we incorporated three roles by default during keystone's installation process [0]. Is the definition in the specification what you mean by reader role, or am I on a different page? [0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html#default-roles The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. Tim -----Original Message----- From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 26 September 2018 at 18:00 To: openstack-dev >, openstack-operators >, openstack-sigs > Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series It's time to start thinking about community-wide goals for the T series. We use community-wide goals to achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high - across all OpenStack projects. Community input is important to ensure that the TC makes good decisions about the goals. 
We need to consider the timing, cycle length, priority, and feasibility of the suggested goals. If you are interested in proposing a goal, please make sure that before the summit it is described in the tracking etherpad [1] and that you have started a mailing list thread on the openstack-dev list about the proposal so that everyone in the forum session [2] has an opportunity to consider the details. The forum session is only one step in the selection process. See [3] for more details. Doug [1] https://etherpad.openstack.org/p/community-goals [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 [3] https://governance.openstack.org/tc/goals/index.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From markus.hentsch at secustack.com Thu Sep 27 17:36:16 2018 From: markus.hentsch at secustack.com (Markus Hentsch) Date: Thu, 27 Sep 2018 19:36:16 +0200 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) Message-ID: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> Dear OpenStack developers, we would like to propose the introduction of an encrypted image format in OpenStack. We already created a basic implementation involving Nova, Cinder, OSC and Glance, which we'd like to contribute. 
We originally created a full spec document but since the official cross-project contribution workflow in OpenStack is a thing of the past, we have no single repository to upload it to. Thus, the Glance team advised us to post this on the mailing list [1]. Ironically, Glance is the least affected project since the image transformation processes affected are taking place elsewhere (Nova and Cinder mostly). Below you'll find the most important parts of our spec that describe our proposal - which our current implementation is based on. We'd love to hear your feedback on the topic and would like to encourage all affected projects to join the discussion. Subsequently, we'd like to receive further instructions on how we may contribute to all of the affected projects in the most effective and collaborative way possible. The Glance team suggested starting with a complete spec in the glance-specs repository, followed by individual specs/blueprints for the remaining projects [1]. Would that be alright for the other teams? [1] http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html Best regards, Markus Hentsch (excerpts from our image encryption spec below) Problem description =================== An image, when uploaded to Glance or being created through Nova from an existing server (VM), may contain sensitive information. The already provided signature functionality only protects images against alteration. Images may be stored on several hosts over long periods of time. First and foremost this includes the image storage hosts of Glance itself. Furthermore it might also involve caches on systems like compute hosts. In conclusion they are exposed to a multitude of potential scenarios involving different hosts with different access patterns and attack surfaces. The OpenStack components involved in those scenarios do not protect the confidentiality of image data. That’s why we propose the introduction of an encrypted image format. 
Use Cases --------- * A user wants to upload an image, which includes sensitive information. To ensure the integrity of the image, a signature can be generated and used for verification. Additionally, the user wants to protect the confidentiality of the image data through encryption. The user generates or uploads a key in the key manager (e.g. Barbican) and uses it to encrypt the image locally using the OpenStack client (osc) when uploading it. Consequently, the image stored on the Glance host is encrypted. * A user wants to create an image from an existing server with ephemeral storage. This server may contain sensitive user data. The corresponding compute host then generates the image based on the data of the ephemeral storage disk. To protect the confidentiality of the data within the image, the user wants Nova to also encrypt the image using a key from the key manager, specified by its secret ID. Consequently, the image stored on the Glance host is encrypted. * A user wants to create a new server or volume based on an encrypted image created by any of the use cases described above. The corresponding compute or volume host has to be able to decrypt the image using the symmetric key stored in the key manager and transform it into the requested resource (server disk or volume). Although not required on a technical level, all of the use cases described above assume the usage of encrypted volume types and encrypted ephemeral storage as provided by OpenStack. Proposed changes ================ * Glance: Adding a container type for encrypted images that supports different mechanisms (format, cipher algorithms, secret ID) via a metadata property. Whether introducing several container types or outsourcing the mechanism definition into metadata properties may still be up for discussion, although we do favor the latter. * Nova: Adding support for decrypting an encrypted image when a server's ephemeral disk is created. 
This includes direct decryption streaming for encrypted disks. Nova should select a suitable mechanism according to the image container type and metadata. The symmetric key will be retrieved from the key manager (e.g. Barbican).

* Cinder: Adding support for decrypting an encrypted image when a volume is created from it. Cinder should select a suitable mechanism according to the image container type and metadata. The symmetric key will be retrieved from the key manager (e.g. Barbican).

* OpenStack Client / SDK: Adding support for encrypting images using a secret ID which references the symmetric key in the key manager (e.g. Barbican). This also involves new CLI arguments to specify the secret ID and the encryption method.

We propose to use an implementation of symmetric AES-256 encryption provided by GnuPG as a basic mechanism supported by this draft. It is a well-established implementation of PGP and supports streamable encryption/decryption, which is important as illustrated below. We also explored the possibility of using more elaborate and dynamic approaches like PKCS#7 (CMS) but ultimately failed to find a free open-source implementation (e.g. OpenSSL) that supports streamable decryption of CMS-wrapped encrypted data. More precisely, no implementation we tested was able to decrypt a symmetrically encrypted, CMS-wrapped container without trying to load it completely into memory or suffering from other limitations regarding large files.

We require the streamability of the encryption/decryption mechanism for two reasons:

1. Loading entire images into the memory of compute hosts or a user's system is unacceptable.

2. We propose direct decryption streaming into the target storage (e.g. an encrypted volume) to prevent the creation of temporary unencrypted files.
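To make the streaming requirement concrete, here is a minimal sketch of the GnuPG-based mechanism named above. This is not part of the spec text; it assumes a GnuPG 2.x `gpg` binary is available, and the file names and the literal passphrase are placeholders. In the actual proposal the symmetric key would be fetched from the key manager (e.g. Barbican) rather than hard-coded.

```shell
# gpg reads plaintext from stdin and writes ciphertext to stdout (and vice
# versa for decryption), so neither the client nor a compute/volume host ever
# needs the whole image in memory.
workdir=$(mktemp -d)
dd if=/dev/urandom of="$workdir/disk.img" bs=1024 count=1024 2>/dev/null  # stand-in for an image file
passphrase='example-secret'  # placeholder; in practice retrieved from the key manager

# Client side: encrypt as a stream before uploading to Glance.
gpg --batch --pinentry-mode loopback --passphrase "$passphrase" \
    --symmetric --cipher-algo AES256 \
    < "$workdir/disk.img" > "$workdir/disk.img.gpg" 2>/dev/null

# Host side: decrypt as a stream. Here the output goes to a file; in the
# proposal it would be piped straight into the target storage (e.g. an
# encrypted volume), so no temporary plaintext file is created.
gpg --batch --pinentry-mode loopback --passphrase "$passphrase" \
    --decrypt < "$workdir/disk.img.gpg" > "$workdir/disk.out" 2>/dev/null

cmp -s "$workdir/disk.img" "$workdir/disk.out" && roundtrip=ok || roundtrip=fail
echo "$roundtrip"
rm -rf "$workdir"
```

Because both directions are plain stdin-to-stdout filters, the decrypt step composes with any downstream writer (e.g. `dd` onto a block device), which is exactly the direct decryption streaming described above.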
From lbragstad at gmail.com Thu Sep 27 17:40:02 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 27 Sep 2018 12:40:02 -0500 Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: Ack - thanks for the clarification, Tim. On Thu, Sep 27, 2018 at 12:10 PM Tim Bell wrote: > > > Lance, > > > > The comment regarding ‘readers’ is more to explain that the distinction > between ‘admin’ and ‘user’ commands is gradually reducing, where OSC has > been prioritising ‘user’ commands. > > > > As an example, we give the CERN security team view-only access to many > parts of the cloud. This allows them to perform their investigations > independently. Thus, many commands which would be, by default, admin only > are also available to roles such as the ‘readers’ (e.g. list, show, … of > internals or projects which they are not in the members list) > > > > I don’t think there is any implications for Keystone (and the readers role > is a nice improvement to replace the previous manual policy definitions) > but more of a question of which subcommands we should aim to support in OSC. > > > > The *-manage commands such as nova-manage, I would consider, out of scope > for OSC. Only admins would be migrating between versions or DB schemas. > > > > Tim > > > > *From: *Lance Bragstad > *Reply-To: *"OpenStack Development Mailing List (not for usage > questions)" > *Date: *Thursday, 27 September 2018 at 15:30 > *To: *"OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > *Subject: *Re: [openstack-dev] [goals][tc][ptl][uc] starting goal > selection for T series > > > > > > On Wed, Sep 26, 2018 at 1:56 PM Tim Bell wrote: > > > Doug, > > Thanks for raising this. I'd like to highlight the goal "Finish moving > legacy python-*client CLIs to python-openstackclient" from the etherpad and > propose this for a T/U series goal. 
> > To give it some context and the motivation: > > At CERN, we have more than 3000 users of the OpenStack cloud. We write an > extensive end user facing documentation which explains how to use the > OpenStack along with CERN specific features (such as workflows for > requesting projects/quotas/etc.). > > One regular problem we come across is that the end user experience is > inconsistent. In some cases, we find projects which are not covered by the > unified OpenStack client (e.g. Manila). In other cases, there are subsets > of the function which require the native project client. > > I would strongly support a goal which targets > > - All new projects should have the end user facing functionality fully > exposed via the unified client > - Existing projects should aim to close the gap within 'N' cycles (N to be > defined) > - Many administrator actions would also benefit from integration (reader > roles are end users too so list and show need to be covered too) > - Users should be able to use a single openrc for all interactions with > the cloud (e.g. not switch between password for some CLIs and Kerberos for > OSC) > > > > Sorry to back up the conversation a bit, but does reader role require work > in the clients? Last release we incorporated three roles by default during > keystone's installation process [0]. Is the definition in the specification > what you mean by reader role, or am I on a different page? > > > > [0] > http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html#default-roles > > > > The end user perception of a solution will be greatly enhanced by a single > command line tool with consistent syntax and authentication framework. > > It may be a multi-release goal but it would really benefit the cloud > consumers and I feel that goals should include this audience also. 
> > Tim > > -----Original Message----- > From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Date: Wednesday, 26 September 2018 at 18:00 > To: openstack-dev , > openstack-operators , > openstack-sigs > Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for > T series > > It's time to start thinking about community-wide goals for the T > series. > > We use community-wide goals to achieve visible common changes, push for > basic levels of consistency and user experience, and efficiently > improve > certain areas where technical debt payments have become too high - > across all OpenStack projects. Community input is important to ensure > that the TC makes good decisions about the goals. We need to consider > the timing, cycle length, priority, and feasibility of the suggested > goals. > > If you are interested in proposing a goal, please make sure that before > the summit it is described in the tracking etherpad [1] and that you > have started a mailing list thread on the openstack-dev list about the > proposal so that everyone in the forum session [2] has an opportunity > to > consider the details. The forum session is only one step in the > selection process. See [3] for more details. 
> > Doug > > [1] https://etherpad.openstack.org/p/community-goals > [2] > https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 > [3] https://governance.openstack.org/tc/goals/index.html > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ekcs.openstack at gmail.com Thu Sep 27 18:00:19 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Thu, 27 Sep 2018 11:00:19 -0700 Subject: [openstack-dev] [congress] 4AM UTC meeting today 9/28 Message-ID: Hi all, the Congress team meeting is transitioning to Fridays 4AM UTC on even weeks (starting 10/5). During this week's transition, we'll have a special transition meeting today Friday at 4AM UTC (instead of previous 2:30AM UTC) even though it's an odd week. Thank you! Eric Kao From zbitter at redhat.com Thu Sep 27 18:14:23 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 27 Sep 2018 14:14:23 -0400 Subject: [openstack-dev] [heat][senlin] Action Required.
Idea to propose for a forum for autoscaling features integration In-Reply-To: <20180927022738.GA22304@rcp.sl.cloud9.ibm.com> References: <20180927022738.GA22304@rcp.sl.cloud9.ibm.com> Message-ID: On 26/09/18 10:27 PM, Qiming Teng wrote: > Hi, > > Due to many reasons, I cannot join you on this event, but I do like to > leave some comments here for references. > > On Tue, Sep 18, 2018 at 11:27:29AM +0800, Rico Lin wrote: >> *TL;DR* >> *How about a forum in Berlin for discussing autoscaling integration (as a >> long-term goal) in OpenStack?* > > First of all, there is nothing called "auto-scaling" in my mind and > "auto" is most of the time a scary word to users. It means the service > or tool is hiding some details from the users when it is doing something > without human intervention. There are cases where this can be useful, > there are also many other cases the service or tool is messing up things > to a state difficult to recover from. What matters most is the usage > scenarios we support. I don't think users care that much how project > teams are organized. Yeah, I mostly agree with you, and in fact I often use the term 'scaling group' to encompass all of the different types of groups in Heat. Our job is to provide an API that is legible to external tools to increase and decrease the size of the group. The 'auto' part is created by connecting it with other services, whether they be OpenStack services like Aodh or Monasca, monitoring services provided by the user themselves, or just manual invocation. (BTW people from the HA-clustering world have a _very_ negative reaction to Senlin's use of the term 'cluster'... there is no perfect terminology.) >> Hi all, as we start to discuss how can we join develop from Heat and Senlin >> as we originally planned when we decided to fork Senlin from Heat long time >> ago. 
>> >> IMO the biggest issues we got now are we got users using autoscaling in >> both services, appears there is a lot of duplicated effort, and some great >> enhancement didn't exist in another service. >> As a long-term goal (from the beginning), we should try to join development >> to sync functionality, and move users to use Senlin for autoscaling. So we >> should start to review this goal, or at least we should try to discuss how >> can we help users without break or enforce anything. > > The original plan, iirc, was to make sure Senlin resources are supported > in Heat, This happened. > and we will gradually fade out the existing 'AutoScalingGroup' > and related resource types in Heat. That's almost impossible to do without breaking existing users. > I have no clue since when Heat is > interested in "auto-scaling" again. It's something that Rico and I have been discussing - it turns out that Heat still has a *lot* of users running very important stuff on Heat scaling group code which, as you know, is burdened by a lot of technical debt. >> What will be great if we can build common library cross projects, and use >> that common library in both projects, make sure we have all improvement >> implemented in that library, finally to use Senlin from that from that >> library call in Heat autoscaling group. And in long-term, we gonna let all >> user use more general way instead of multiple ways but generate huge >> confusing for users. > > The so called "auto-scaling" is always a solution, built by > orchestrating many moving parts across the infrastructure. In some > cases, you may have to install agents into VMs for workload metering. Totally agree, but... > I > am not convinced this can be done using a library approach. Clearly there are _some_ parts that could in principle be shared. (I added some comments to the etherpad to clarify what I think Rico was referring to.) 
It seems to me that there's value in discussing it together rather than just working completely independently, even if the outcome of that discussion is that >> *As an action, I propose we have a forum in Berlin and sync up all effort >> from both teams to plan for idea scenario design. The forum submission [1] >> ended at 9/26.* >> Also would benefit from both teams to start to think about how they can >> modulize those functionalities for easier integration in the future. >> >> From some Heat PTG sessions, we keep bring out ideas on how can we improve >> current solutions for Autoscaling. We should start to talk about will it >> make sense if we combine all group resources into one, and inherit from it >> for other resources (ideally gonna deprecate rest resource types). Like we >> can do Batch create/delete in Resource Group, but not in ASG. We definitely >> got some unsynchronized works inner Heat, and cross Heat and Senlin. > > Totally agree with you on this. We should strive to minimize the > technologies users have to master when they have a need. +1 - to expand on Rico's example, we have at least 3 completely separate implementations of batching, each supporting different actions: Heat AutoscalingGroup: updates only Heat ResourceGroup: create or update Senlin Batch Policy: updates only and users are asking for batch delete as well. This is clearly an area where technical debt from duplicate implementations is making it hard to deliver value to users. cheers, Zane. 
From jaypipes at gmail.com Thu Sep 27 18:16:51 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 27 Sep 2018 14:16:51 -0400 Subject: [openstack-dev] [nova] Stein PTG summary In-Reply-To: <76fe7317-df29-13bc-8dc9-73e45d93a450@fried.cc> References: <03b8b23f-31b6-e5fa-675d-5a40fbab58b5@gmail.com> <543de612-b89e-54dc-8c2c-b6c0cf46b3c0@gmail.com> <76fe7317-df29-13bc-8dc9-73e45d93a450@fried.cc> Message-ID: On 09/27/2018 11:15 AM, Eric Fried wrote: > On 09/27/2018 07:37 AM, Matt Riedemann wrote: >> On 9/27/2018 5:23 AM, Sylvain Bauza wrote: >>> >>> >>> On Thu, Sep 27, 2018 at 2:46 AM Matt Riedemann >> > wrote: >>> >>>     On 9/26/2018 5:30 PM, Sylvain Bauza wrote: >>>      > So, during this day, we also discussed about NUMA affinity and we >>>     said >>>      > that we could possibly use nested resource providers for NUMA >>>     cells in >>>      > Stein, but given we don't have yet a specific Placement API >>>     query, NUMA >>>      > affinity should still be using the NUMATopologyFilter. >>>      > That said, when looking about how to use this filter for vGPUs, >>>     it looks >>>      > to me that I'd need to provide a new version for the NUMACell >>>     object and >>>      > modify the virt.hardware module. Are we also accepting this >>>     (given it's >>>      > a temporary question), or should we need to wait for the >>>     Placement API >>>      > support ? >>>      > >>>      > Folks, what are you thoughts ? >>> >>>     I'm pretty sure we've said several times already that modeling >>> NUMA in >>>     Placement is not something for which we're holding up the extraction. >>> >>> >>> It's not an extraction question. Just about knowing whether the Nova >>> folks would accept us to modify some o.vo object and module just for a >>> temporary time until Placement API has some new query parameter. 
>>> Whether Placement is extracted or not isn't really the problem, it's >>> more about the time it will take for this query parameter ("numbered >>> request groups to be in the same subtree") to be implemented in the >>> Placement API. >>> The real problem we have with vGPUs is that if we don't have NUMA >>> affinity, the performance would be around 10% less for vGPUs (if the >>> pGPU isn't on the same NUMA cell than the pCPU). Not sure large >>> operators would accept that :( >>> >>> -Sylvain >> >> I don't know how close we are to having whatever we need for modeling >> NUMA in the placement API, but I'll go out on a limb and assume we're >> not close. > > True story. We've been talking about ways to do this since (at least) > the Queens PTG, but haven't even landed on a decent design, let alone > talked about getting it specced, prioritized, and implemented. Since > full NRP support was going to be a prerequisite in any case, and our > Stein plate is full, Train is the earliest we could reasonably expect to > get the placement support going, let alone the nova side. So yeah... > >> Given that, if we have to do something within nova for NUMA >> affinity for vGPUs for the NUMATopologyFilter, then I'd be OK with that >> since it's short term like you said (although our "short term" >> workarounds tend to last for many releases). Anyone that cares about >> NUMA today already has to enable the scheduler filter anyway. >> > > +1 to this ^ Or, I don't know, maybe don't do anything and deal with the (maybe) 10% performance impact from the cross-NUMA main memory <-> CPU hit for post-processing of already parallel-processed GPU data. In other words, like I've mentioned in numerous specs and in person, I really don't think this is a major problem and is mostly something we're making a big deal about for no real reason. 
-jay

From juliaashleykreger at gmail.com Thu Sep 27 18:29:56 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Thu, 27 Sep 2018 11:29:56 -0700
Subject: [openstack-dev] [ironic][neutron] SmartNics with Ironic
Message-ID:

Greetings everyone,

Now that the PTG is over, I would like to go ahead and get the specification that was proposed to ironic-specs updated to represent the discussions that took place at the PTG. A few highlights from my recollection:

* Ironic being the source of truth for the hardware configuration for the neutron agent to determine where to push configuration to. This would include the address and credential information (certificates, right?).

* The information required is somehow sent to Neutron (possibly as part of the binding profile, which we could send each time port actions are requested by Ironic).

* The Neutron agent running on the control plane connects outbound to the smartnic, using the information supplied to perform the appropriate network configuration.

* In Ironic, this would likely be a new network_interface driver module, with some additional methods that help facilitate the work-flow logic changes needed in each deploy_interface driver module.

* Ironic would then be informed or gain awareness that the configuration has been completed and that the deployment can proceed. (A different spec has been proposed regarding this.)

I have submitted a forum session based upon this, and the agreed-upon goal at the PTG was to have the ironic spec written up to describe the required changes. I guess the next question is: who wants to update the specification?

-Julia

From liliueecg at gmail.com Thu Sep 27 18:33:17 2018
From: liliueecg at gmail.com (Li Liu)
Date: Thu, 27 Sep 2018 14:33:17 -0400
Subject: [openstack-dev] [Cyborg] Stein PTG summary
Message-ID:

I've written up a high-level summary of the discussions we had at the PTG -- please feel free to reply to this thread to fill in anything I've missed.
Sorry about the delay, I was really tied up after the PTG.

We used our PTG etherpad: https://etherpad.openstack.org/p/cyborg-ptg-stein

Cyborg Centric:

1. Stein Specs:
   a. os-acc e2e (in queue) -- probably needs to be refactored to move REST API signatures to Nova specs, and Cyborg-specific details in this spec
   b. DB evolution introducing VAR and device profile concept (need to add) Yes (part of the Rolling upgrade req)
      i. VAR related APIs spec:
         - GET /vars (optionally with request body of a list of VAR UUIDs)
         - GET /vars/instance/{instance_uuid}
         - GET, POST, PUT, DELETE /vars/unbound
         - GET, POST, PUT, DELETE /vars/bound
      ii. Device profiles spec
   c. device discovery (in queue)
   d. pci_white_list (low priority for S?) low priority.

2. Drivers:
   a. Land current drivers in the queue: opae, gpu, clock
   b. add Xilinx driver
   c. add NPU driver (possibly from Huawei first, other NPU cards are welcomed as well)
   d. add RISC-V driver support

3. DOC: We will catch up with the documentation for Cyborg in the upcoming cycle

4. Infra:
   a. fake driver to facilitate end-to-end function testing (part of the Rolling upgrade request) (check with shaohe)
   b. utilize storyboard for task mgmt and tracing

Nova-Cyborg:

Discussion details with Nova can be found at https://etherpad.openstack.org/p/cyborg-nova-ptg-stein

1. Nova Stein Specs:
   a. device-profile e2e (for phase 1)
   b. new nova attach-device api (for phase 2)

2. Need Nova to complete:
   a. Nested Resource Provider: Keep your eyes on this series and its associated blueprint: https://review.openstack.org/#/q/topic:use-nested-allocation-candidates

Note: Nova has made it clear that they do not expect Nova changes needed for Cyborg to be upstreamed in Stein, because the bar for integration is high. Cyborg needs to prove rolling upgrade etc., we need to pass CI/gates with Nova, and Nova changes need to be tested at unit/functional/tempest levels. We have to make a push to get this done against expectations.

Neutron-Cyborg:

1. Neutron Stein Specs:
   a.
Propose a ML2 Plugin (networking-cyborg)
   b. neutron notification: add notification support in Cyborg

MISC:
1. work with SKT team on LOCI (OCI container image) support for OpenStack-Helm (after stein-1 or stein-2) Is SKT the SIG-K8 team? (they are one of the biggest Korean Telco Operators :) )
2. work with SKT team and Dims on the k8s integration design/discussion

--
Thank you
Regards
Li Liu

From sean.mcginnis at gmx.com Thu Sep 27 18:33:29 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Thu, 27 Sep 2018 13:33:29 -0500
Subject: [openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming
Message-ID: <20180927183328.GA23767@sm-workstation>

This probably applies to all deployment tools, so hopefully this reaches the right folks.

In Havana, Cinder deprecated the use of specifying only the module for configuring backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed the backwards-compatibility handling for configs that still used the old way.

From a quick search, it appears there may be some tools that are still defaulting to setting the backup driver name using the old module path. If your project does not specify the full driver class path, please update it to do so now.

Any questions, please reach out here or in the #openstack-cinder channel.

Thanks!
Sean

From e0ne at e0ne.info Thu Sep 27 18:34:02 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Thu, 27 Sep 2018 21:34:02 +0300
Subject: [openstack-dev] [horizon][plugins] npm jobs fail due to new XStatic-jQuery release (was: Horizon gates are broken)
In-Reply-To:
References:
Message-ID:

Hi,

Unfortunately, this issue affects some of the plugins too :(. At least the gates for magnum-ui, senlin-dashboard, zaqar-ui and zun-ui are broken now. I'm working with the project teams to fix this asap. Let's wait to see if [5] helps for senlin-dashboard and then fix all the rest of the plugins.
[5] https://review.openstack.org/#/c/605826/ Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Wed, Sep 26, 2018 at 4:50 PM Ivan Kolodyazhny wrote: > Hi all, > > Patch [1] is merged and our gates are un-blocked now. I went throw review > list and post 'recheck' where it was needed. > > We need to cherry-pick this fix to stable releases too. I'll do it asap > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > > On Mon, Sep 24, 2018 at 11:18 AM Ivan Kolodyazhny wrote: > >> Hi team, >> >> Unfortunately, horizon gates are broken now. We can't merge any patch due >> to the -1 from CI. >> I don't want to disable tests now, that's why I proposed a fix [1]. >> >> We'd got released some of XStatic-* packages last week. At least new >> XStatic-jQuery [2] breaks horizon [3]. I'm working on a new job for >> requirements repo [4] to prevent such issues in the future. >> >> Please, do not try 'recheck' until [1] will be merged. >> >> [1] https://review.openstack.org/#/c/604611/ >> [2] https://pypi.org/project/XStatic-jQuery/#history >> [3] https://bugs.launchpad.net/horizon/+bug/1794028 >> [4] https://review.openstack.org/#/c/604613/ >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Thu Sep 27 19:12:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 27 Sep 2018 14:12:52 -0500 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> <1a886e1e-07d7-5326-6a9f-3203367a20a8@inaugust.com> Message-ID: <8eab21ad-c186-f5e3-714c-0d4152e07037@gmail.com> On 9/27/2018 10:13 AM, Dean Troyer wrote: > On Thu, Sep 27, 2018 at 9:10 AM, Doug Hellmann wrote: >> Monty Taylor writes: >>> Main difference is making sure these new deconstructed plugin teams >>> understand the client support lifecycle - which is that we don't drop >>> support for old versions of services in OSC (or SDK). It's a shift from >>> the support lifecycle and POV of python-*client, but it's important and >>> we just need to all be on the same page. >> That sounds like a reason to keep the governance of the libraries under >> the client tool project. > Hmmm... I think that may address a big chunk of my reservations about > being able to maintain consistency and user experience in a fully > split-OSC world. > > dt My biggest worry with splitting everything out into plugins with new core teams, even with python-openstackclient-core as a superset, is that those core teams will all start approving things that don't fit with the overall guidelines of how OSC commands should be written. I've had to go to the "Dean well" several times when reviewing osc-placement commands. But the python-openstackclient-core team probably isn't going to scale to fit the need of all of these gaps that need closing from the various teams, either. So how does that get fixed? I've told Dean and Steve before that if they want me to review / ack something compute-specific in OSC that they can call on me, like a liaison. Maybe that's all we need to start? 
Because I've definitely disagreed with compute CLI changes in OSC that have a +2 from the core team because of a lack of understanding from both the contributor and the reviewers about what the compute API actually does, or how a microversion behaves. Or maybe we just do some kind of subteam thing where OSC core doesn't look at a change until the subteam has +1ed it. We have a similar concept in nova with virt driver subteams. -- Thanks, Matt From alee at redhat.com Thu Sep 27 19:13:18 2018 From: alee at redhat.com (Ade Lee) Date: Thu, 27 Sep 2018 15:13:18 -0400 Subject: [openstack-dev] [oslo][castellan] Time for a 1.0 release? In-Reply-To: <8bab2939-ae16-31f3-8191-2cb1e81bc9df@nemebean.com> References: <8bab2939-ae16-31f3-8191-2cb1e81bc9df@nemebean.com> Message-ID: <1538075598.6608.136.camel@redhat.com> On Tue, 2018-09-25 at 16:30 -0500, Ben Nemec wrote: > Doug pointed out on a recent Oslo release review that castellan is > still > not officially 1.0. Given the age of the project and the fact that > we're > asking people to deploy a Castellan-compatible keystore as one of > the > base services, it's probably time to address that. > > To that end, I'm sending this to see if anyone is aware of any > reasons > we shouldn't go ahead and tag a 1.0 of Castellan. > + 1 > Thanks. > > -Ben > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mnaser at vexxhost.com Thu Sep 27 19:23:19 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 27 Sep 2018 15:23:19 -0400 Subject: [openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming In-Reply-To: <20180927183328.GA23767@sm-workstation> References: <20180927183328.GA23767@sm-workstation> Message-ID: Thanks for the email Sean. 
https://review.openstack.org/605846 Fix Cinder backup to use full paths I think this should cover us, please let me know if we did things right. FYI: the docs all still seem to point at the old paths.. https://docs.openstack.org/cinder/latest/configuration/block-storage/backup-drivers.html On Thu, Sep 27, 2018 at 2:33 PM Sean McGinnis wrote: > > This probably applies to all deployment tools, so hopefully this reaches the > right folks. > > In Havana, Cinder deprecated the use of specifying the module for configuring > backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed > the backwards compatibility handling for configs that still used the old way. > > Looking through a quick search, it appears there may be some tools that are > still defaulting to setting the backup driver name using the patch. If your > project does not specify the full driver class path, please update these to do > so now. > > Any questions, please reach out here or in the #openstack-cinder channel. > > Thanks! > Sean > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From Kevin.Fox at pnnl.gov Thu Sep 27 19:33:09 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 27 Sep 2018 19:33:09 +0000 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <8eab21ad-c186-f5e3-714c-0d4152e07037@gmail.com> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> <1a886e1e-07d7-5326-6a9f-3203367a20a8@inaugust.com> , <8eab21ad-c186-f5e3-714c-0d4152e07037@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C1B010B@EX10MBOX03.pnnl.gov> If the project plugins were maintained by the OSC project still, maybe there would be incentive for the various other projects to join the OSC project, scaling things up? Thanks, Kevin ________________________________________ From: Matt Riedemann [mriedemos at gmail.com] Sent: Thursday, September 27, 2018 12:12 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series On 9/27/2018 10:13 AM, Dean Troyer wrote: > On Thu, Sep 27, 2018 at 9:10 AM, Doug Hellmann wrote: >> Monty Taylor writes: >>> Main difference is making sure these new deconstructed plugin teams >>> understand the client support lifecycle - which is that we don't drop >>> support for old versions of services in OSC (or SDK). It's a shift from >>> the support lifecycle and POV of python-*client, but it's important and >>> we just need to all be on the same page. >> That sounds like a reason to keep the governance of the libraries under >> the client tool project. > Hmmm... I think that may address a big chunk of my reservations about > being able to maintain consistency and user experience in a fully > split-OSC world. 
> > dt My biggest worry with splitting everything out into plugins with new core teams, even with python-openstackclient-core as a superset, is that those core teams will all start approving things that don't fit with the overall guidelines of how OSC commands should be written. I've had to go to the "Dean well" several times when reviewing osc-placement commands. But the python-openstackclient-core team probably isn't going to scale to fit the need of all of these gaps that need closing from the various teams, either. So how does that get fixed? I've told Dean and Steve before that if they want me to review / ack something compute-specific in OSC that they can call on me, like a liaison. Maybe that's all we need to start? Because I've definitely disagreed with compute CLI changes in OSC that have a +2 from the core team because of a lack of understanding from both the contributor and the reviewers about what the compute API actually does, or how a microversion behaves. Or maybe we just do some kind of subteam thing where OSC core doesn't look at a change until the subteam has +1ed it. We have a similar concept in nova with virt driver subteams. 
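A rough sketch of that subteam gating idea (hypothetical names and data shapes — no such hook exists in Gerrit or OSC today, this only illustrates the rule):

```python
# Hypothetical sketch of the "subteam +1 before core review" gate
# described above. Names and data shapes are illustrative only;
# no such hook exists in Gerrit or python-openstackclient today.

def ready_for_core_review(files_touched, reviews, subteams):
    """Return True once every subteam owning a touched area has a +1.

    files_touched: set of changed file paths in the review
    reviews: mapping of reviewer name -> Code-Review score (-2..+2)
    subteams: mapping of area substring -> set of subteam member names
    """
    for area, members in subteams.items():
        if any(area in path for path in files_touched):
            # The owning subteam must have at least one +1 on record.
            if not any(reviews.get(member, 0) >= 1 for member in members):
                return False
    return True
```

Under a scheme like this, a compute-area change sits until a compute liaison +1s it, and only then shows up in the core review queue.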
-- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Thu Sep 27 19:35:48 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 27 Sep 2018 14:35:48 -0500 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C1B010B@EX10MBOX03.pnnl.gov> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> <1a886e1e-07d7-5326-6a9f-3203367a20a8@inaugust.com> <8eab21ad-c186-f5e3-714c-0d4152e07037@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1B010B@EX10MBOX03.pnnl.gov> Message-ID: <2219fd3e-6b9c-ccc0-99c7-522f4e2f8748@gmail.com> On 9/27/2018 2:33 PM, Fox, Kevin M wrote: > If the project plugins were maintained by the OSC project still, maybe there would be incentive for the various other projects to join the OSC project, scaling things up? Sure, I don't really care about governance. But I also don't really care about all of the non-compute API things in OSC either. 
-- Thanks, Matt From Kevin.Fox at pnnl.gov Thu Sep 27 19:57:05 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 27 Sep 2018 19:57:05 +0000 Subject: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <2219fd3e-6b9c-ccc0-99c7-522f4e2f8748@gmail.com> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> <1a886e1e-07d7-5326-6a9f-3203367a20a8@inaugust.com> <8eab21ad-c186-f5e3-714c-0d4152e07037@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1B010B@EX10MBOX03.pnnl.gov>, <2219fd3e-6b9c-ccc0-99c7-522f4e2f8748@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C1B014F@EX10MBOX03.pnnl.gov> It's the commons problem again. Either we encourage folks to contribute a little bit to the commons (review a few other people's non-compute cli thingies; in doing so, you learn how to better do the cli in the generic/user-friendly ways), to further their own project goals (get easier access to contribute to the cli of the compute stuff), or we do what we've always done. Let each project maintain its own cli and have no uniformity at all. Why are the walls in OpenStack so high? Kevin ________________________________________ From: Matt Riedemann [mriedemos at gmail.com] Sent: Thursday, September 27, 2018 12:35 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series On 9/27/2018 2:33 PM, Fox, Kevin M wrote: > If the project plugins were maintained by the OSC project still, maybe there would be incentive for the various other projects to join the OSC project, scaling things up? Sure, I don't really care about governance. But I also don't really care about all of the non-compute API things in OSC either.
-- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kennelson11 at gmail.com Thu Sep 27 20:04:28 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 27 Sep 2018 13:04:28 -0700 Subject: [openstack-dev] [StoryBoard] PTG Summary In-Reply-To: References: Message-ID: Updates! On Thu, Sep 20, 2018 at 2:15 PM Kendall Nelson wrote: > Hello Lovers of Task Tracking! > > So! We talked about a lot of things, and I went to a lot of rooms to talk > about StoryBoard related things and it was already a week ago so bear with > me. > > We had a lot of good discussions as we were able to include SotK in > discussions via videocalling. We also had the privilege of our outreachy > intern to come all the way from Cairo to Denver to join us :) > > Onto the summaries! > > Story Attachments > > ============== > > This topic has started coming up with increasing regularity. Currently, > StoryBoard doesn’t support attachments, but it’s a feature that several > projects claim is blocking their migration. The current workaround is > either to trim down logs and paste the relevant section, or to host the > file elsewhere and link to its location. After consulting with the > infrastructure team, we concluded that currently, there is no donated > storage. The next step is for me to draft a spec detailing our requirements > and implementation details and then to include infra on the review to help > them have something concrete to go to vendors with. For notes on the > proposed method see the etherpad[1]. > > One other thing discussed during this topic was how we could maybe migrate > the current attachments. This isn’t supported by the migration script at > this point, but it’s something we could write a separate script for.
It > should be separate because it would be a painfully slow process and we > wouldn’t want to slow down the migration script more than it already is by > the Launchpad API. The attachments script would be run after the initial > migration; that being said, everything still persists in Launchpad so > things can still be referenced there. > > Handling Duplicate Stories > > ==================== > > This is also an ongoing topic for discussion. Duplicate stories if not > handled properly could dilute the database as we get more projects migrated > over. The plan we settled on is to add a ‘Mark as Duplicate’ button to the > webclient and corresponding functions to the API. The user would be > prompted for a link to the master story. The master story would get a new > timeline event that would have the link to the duplicate and the duplicate > story would have all tasks auto marked as invalid (aside from those marked > as merged) so that the story then shows as inactive. The duplicate story > could also get a timeline event that explains what happened and links to > the master story. I’ve yet to create a story for all of this, but it’s on > my todo list. > Turns out there is a story already[5]. > > Handling Thousands of Comments Per Story > > ================================== > > There’s this special flower story[2] that has literally thousands of > comments on it because of all of the gerrit comments being added to the > timeline for all the patches for all the tasks. Rendering of the timeline > portion of the webpage in the webclient is virtually impossible. It will > load the tasks and then hang forever. The discussion around this boiled > down to this: other task trackers also can’t handle this and there is a > better way to divvy up the story into several stories and contain them in a > worklist for future, similar work. 
For now, users can select what they want > to load in their timeline views for stories, so by unmarking all of the > timeline events in their preferences, the story will load completely sans > timeline details. Another solution we discussed to help alleviate the > timeline load on stories with lots of tasks is to have a task field that > links to the review, rather than a comment from gerrit every time a new > patch gets pushed. Essentially we want to focus on cleaning up the timeline > rather than just going straight to a pagination type of solution. It was > also concluded that we want to add another user preference for page sizes > of 1000. Tasks have not been created in the story related to this issue > yet[3], but it’s on my todo list. > Updates story[6]. > Project Group Descriptions > > ===================== > > There was a request to have project group descriptions, but currently > there is nothing in the API handling this. Discussion concluded with > agreement that this shouldn’t be too difficult. All that needs to happen is > a few additions to the API and the connection to managing group definitions > in project-config. I still need to make a story for this. > Created a story for this[7]. > Translating storyboard-webclient > > ========================= > > There was an infrastructure mailing list thread a little while back that > kicked off discussion on this topic. It was received as an interesting idea > and could help with the adoption of StoryBoard outside of OpenStack. The > biggest concern was communicating to users that are seeing the webclient > rendered in some other language that they still need to create > tasks/stories/worklists/boards in English or whatever the default language > is for the organization that is hosting StoryBoard. This could be a banner > when someone logs in, or something on users’ dashboards.
One of the things > that needs to happen first is to find libraries for javascript and angular > for signaling what strings need to be translated. We didn’t really outline > next steps past that as it’s not super high priority, but it’s definitely > an effort we would support if someone wanted to start driving it forward. > I made a story[8] for this even though it’s pretty far down the list of priorities. > Easier Rollback for Webclient Continuous Deployment > > ========================================= > > With the puppet-storyboard module we deploy from tarballs instead of from > git right now, and we don't preserve earlier tarballs which makes it > difficult to roll back changes when we find issues. There wasn’t a ton of > discussion besides, yes we need to figure this out. Pre-zuulv3 we uploaded > tarballs with the git sha, if we apply that to > publish-openstack-javascript-content, that might help the situation. > Made a story for this[9]. > Managing Project Coresec Groups > > ========================== > > The vast majority of work on private stories has been implemented. Stories > can be marked as private and users can subscribe other users to those > private stories so that only those people can see them. The only > convenience that is currently lacking is adding groups of users (manually > or automatically if in a template story). Groups of users are currently > only managed by StoryBoard admins. We would like to make this managed in a > repository or by proxying gerrit group management. This shouldn’t be too > complicated a change, it would only require some sort of flag being set for > a group definition and then some database migration to sync those groups > into the StoryBoard database. If you have opinions on this topic, it’s not > all set in stone and we would love to hear your thoughts! > This is the story tracking private stories work[10].
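To make that group-sync idea concrete, here is a minimal sketch of the reconciliation step the flag could enable (hypothetical data model — the real StoryBoard schema and gerrit group source would look different):

```python
# Hypothetical sketch of mirroring an externally managed group (for
# example a gerrit coresec group) into StoryBoard's own group table.

def plan_group_sync(local_members, external_members):
    """Diff current vs. desired membership.

    Both arguments are sets of user identifiers. Returns the
    (to_add, to_remove) sets a sync job would apply locally.
    """
    to_add = set(external_members) - set(local_members)
    to_remove = set(local_members) - set(external_members)
    return to_add, to_remove
```

A periodic job (or a hook on group changes) would fetch the flagged groups, compute the diff, and apply it, so private-story subscriptions could reference the mirrored group instead of individually managed members.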
> Searching > > ======== > > It’s become apparent that while the search and type ahead features of > StoryBoard work better than most users think at first glance, it’s an issue > that users struggle with searching as much as they do. We talked about > possible solutions for this aside from writing documentation to cover > searching in the webclient. The solution we talked about most was that it > might be easier for our users if we used the gerrit query language as that > is what the majority of our users are already familiar with. The next step > here is to write a spec for using the gerrit query language- or some other > language if users disagree about using the gerrit language. > > > Show all OpenStack Repos in StoryBoard? > > ================================ > > Are we getting to the point where it would be helpful for the users of > StoryBoard to be able to add tasks to stories for all the repos not already > migrated to StoryBoard? This would be incredibly helpful for things like > release goal tracking where many repos that haven’t been migrated had tasks > that were assigned to governance instead of the actual repo so as to be > able to track everything in a single story. This is something we will want > to take up with the TC during a set of office hours in the next week or so. > > > Summary & Continuing Conversations > > ============================= > > My brain is mush. Hopefully I covered the majority of the important topics > and did them justice! Anyone that was there, please feel free to correct > me. Anyone that wasn’t there that is interested in getting involved with > any of this, please join us in #storyboard on IRC or email us with the > [Storyboard] tag to the dev or infra mailing lists. We also have weekly > meetings[4] on Wednesdays at 19:00 UTC, please join us! > > > I've got a lot of stories to make/update and tasks to add.. > > > Thanks! 
> > -Kendall Nelson (diablo_rojo) > > [1] https://etherpad.openstack.org/p/sb-stein-ptg-planning > > [2] https://storyboard.openstack.org/#!/story/2002586 > > [3] https://storyboard.openstack.org/#!/story/2003525 > > [4] http://eavesdrop.openstack.org/#StoryBoard_Meeting > > I'm also trying to organize all of the work we are doing in a board. I'm in the middle of an overhaul atm, but if you want to follow our work, you will be able to see it here[11]... eventually. -Kendall (diablo_rojo) [5] https://storyboard.openstack.org/#!/story/2002136 [6] https://storyboard.openstack.org/#!/story/2003525 [7] https://storyboard.openstack.org/#!/story/2003886 [8] https://storyboard.openstack.org/#!/story/2003887 [9] https://storyboard.openstack.org/#!/story/2003888 [10] https://storyboard.openstack.org/#!/story/2000568 [11] https://storyboard.openstack.org/#!/board/1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Sep 27 22:23:26 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 27 Sep 2018 17:23:26 -0500 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> Message-ID: <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> On 9/27/2018 3:02 PM, Jay Pipes wrote: > A great example of this would be the proposed "deploy template" from > [2]. This is nothing more than abusing the placement traits API in order > to allow passthrough of instance configuration data from the nova flavor > extra spec directly into the nodes.instance_info field in the Ironic > database. It's a hack that is abusing the entire concept of the > placement traits concept, IMHO.
> > We should have a way *in Nova* of allowing instance configuration > key/value information to be passed through to the virt driver's spawn() > method, much the same way we provide for user_data that gets exposed > after boot to the guest instance via configdrive or the metadata service > API. What this deploy template thing is is just a hack to get around the > fact that nova doesn't have a basic way of passing through some collated > instance configuration key/value information, which is a darn shame and > I'm really kind of annoyed with myself for not noticing this sooner. :( We talked about this in Dublin though, right? We said a good thing to do would be to have some kind of template/profile/config/whatever stored off in glare where schema could be registered on that thing, and then you pass a handle (ID reference) to that to nova when creating the (baremetal) server, nova pulls it down from glare and hands it off to the virt driver. It's just that no one is doing that work. -- Thanks, Matt From melwittt at gmail.com Thu Sep 27 22:49:47 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 27 Sep 2018 15:49:47 -0700 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> Message-ID: On Thu, 27 Sep 2018 17:23:26 -0500, Matt Riedemann wrote: > On 9/27/2018 3:02 PM, Jay Pipes wrote: >> A great example of this would be the proposed "deploy template" from >> [2]. This is nothing more than abusing the placement traits API in order >> to allow passthrough of instance configuration data from the nova flavor >> extra spec directly into the nodes.instance_info field in the Ironic >> database.
It's a hack that is abusing the entire concept of the >> placement traits concept, IMHO. >> >> We should have a way *in Nova* of allowing instance configuration >> key/value information to be passed through to the virt driver's spawn() >> method, much the same way we provide for user_data that gets exposed >> after boot to the guest instance via configdrive or the metadata service >> API. What this deploy template thing is is just a hack to get around the >> fact that nova doesn't have a basic way of passing through some collated >> instance configuration key/value information, which is a darn shame and >> I'm really kind of annoyed with myself for not noticing this sooner. :( > > We talked about this in Dublin through right? We said a good thing to do > would be to have some kind of template/profile/config/whatever stored > off in glare where schema could be registered on that thing, and then > you pass a handle (ID reference) to that to nova when creating the > (baremetal) server, nova pulls it down from glare and hands it off to > the virt driver. It's just that no one is doing that work. If I understood correctly, that discussion was around adding a way to pass a desired hardware configuration to nova when booting an ironic instance. And that it's something that isn't yet possible to do using the existing ComputeCapabilitiesFilter. Someone please correct me if I'm wrong there. That said, I still don't understand why we are talking about deprecating the ComputeCapabilitiesFilter if there's no supported way to replace it yet. If boolean traits are not enough to replace it, then we need to hold off on deprecating it, right? Would the template/profile/config/whatever in glare approach replace what the ComputeCapabilitiesFilter is doing or no? Sorry, I'm just not clearly understanding this yet. 
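For illustration, the mismatch being discussed (simplified, not actual nova code): the ComputeCapabilitiesFilter compares arbitrary key/value pairs, while placement traits are strictly boolean:

```python
# Simplified contrast between the two matching models -- not nova code.

def capabilities_filter_passes(extra_specs, node_capabilities):
    # ComputeCapabilitiesFilter-style: arbitrary key/value comparison.
    return all(node_capabilities.get(key) == value
               for key, value in extra_specs.items())

def traits_filter_passes(required_traits, node_traits):
    # Placement-style: a trait is boolean, either on the provider or not.
    return set(required_traits) <= set(node_traits)
```

A key/value spec such as {'raid_level': '10'} only maps onto traits by minting one custom trait per value (e.g. a hypothetical CUSTOM_RAID_10), which is part of why deprecating the filter before a replacement exists is contentious.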
-melanie From duc.openstack at gmail.com Thu Sep 27 23:00:01 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Thu, 27 Sep 2018 16:00:01 -0700 Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration In-Reply-To: References: <20180927022738.GA22304@rcp.sl.cloud9.ibm.com> Message-ID: On Thu, Sep 27, 2018 at 11:14 AM Zane Bitter wrote: > > > and we will gradually fade out the existing 'AutoScalingGroup' > > and related resource types in Heat. > > That's almost impossible to do without breaking existing users. One approach would be to switch the underlying Heat AutoScalingGroup implementation to use Senlin and then deprecate the AutoScalingGroup resource type in favor of the Senlin resource type over several cycles. Not saying that this is the definitive solution, but it is worth discussing as an option since this follows a path other projects have taken (e.g. nova-volume extraction into cinder). A prerequisite to this approach would probably require Heat to create the so-called common library to house the autoscaling code. Then Senlin would need to achieve feature parity against this autoscaling library before the switch could happen. > > Clearly there are _some_ parts that could in principle be shared. (I > added some comments to the etherpad to clarify what I think Rico was > referring to.) > > It seems to me that there's value in discussing it together rather than > just working completely independently, even if the outcome of that > discussion is that +1. The outcome of any discussion will be beneficial not only to the teams but also the operators and users. 
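As one concrete (hypothetical) example of what such a common library could own first: the capacity arithmetic both projects already implement separately. The adjustment-type names below follow Heat's ScalingPolicy; Senlin's equivalents differ roughly only in spelling.

```python
# Hypothetical sketch of shared autoscaling math a common library could
# house; adjustment-type names follow Heat's OS::Heat::ScalingPolicy.

def new_capacity(current, adjustment, adjustment_type, min_size, max_size):
    if adjustment_type == 'change_in_capacity':
        desired = current + adjustment
    elif adjustment_type == 'percent_change_in_capacity':
        desired = current + int(current * adjustment / 100.0)
    elif adjustment_type == 'exact_capacity':
        desired = adjustment
    else:
        raise ValueError('unknown adjustment type: %s' % adjustment_type)
    # Clamp to the group's bounds instead of failing the scaling action.
    return max(min_size, min(max_size, desired))
```

If Heat's resources and Senlin's policies both called into one implementation like this, feature parity would be a matter of covering the library rather than re-deriving the behavior twice.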
Regards, Duc (dtruong) From jaypipes at gmail.com Thu Sep 27 23:45:01 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 27 Sep 2018 19:45:01 -0400 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> Message-ID: On 09/27/2018 06:23 PM, Matt Riedemann wrote: > On 9/27/2018 3:02 PM, Jay Pipes wrote: >> A great example of this would be the proposed "deploy template" from >> [2]. This is nothing more than abusing the placement traits API in >> order to allow passthrough of instance configuration data from the >> nova flavor extra spec directly into the nodes.instance_info field in >> the Ironic database. It's a hack that is abusing the entire concept of >> the placement traits concept, IMHO. >> >> We should have a way *in Nova* of allowing instance configuration >> key/value information to be passed through to the virt driver's >> spawn() method, much the same way we provide for user_data that gets >> exposed after boot to the guest instance via configdrive or the >> metadata service API. What this deploy template thing is is just a >> hack to get around the fact that nova doesn't have a basic way of >> passing through some collated instance configuration key/value >> information, which is a darn shame and I'm really kind of annoyed with >> myself for not noticing this sooner. :( > > We talked about this in Dublin though, right?
We said a good thing to do > would be to have some kind of template/profile/config/whatever stored > off in glare where schema could be registered on that thing, and then > you pass a handle (ID reference) to that to nova when creating the > (baremetal) server, nova pulls it down from glare and hands it off to > the virt driver. It's just that no one is doing that work. No, nobody is doing that work. I will if need be if it means not hacking the placement API to serve this purpose (for which it wasn't intended). -jay From persia at shipstone.jp Thu Sep 27 23:56:53 2018 From: persia at shipstone.jp (Emmet Hikory) Date: Fri, 28 Sep 2018 08:56:53 +0900 Subject: [openstack-dev] [all][tc][elections] Stein TC Election Results Message-ID: <20180927235653.GA18250@shipstone.jp> Please join me in congratulating the 6 newly elected members of the Technical Committee (TC): - Doug Hellmann (dhellmann) - Julia Kreger (TheJulia) - Jeremy Stanley (fungi) - Jean-Philippe Evrard (evrardjp) - Lance Bragstad (lbragstad) - Ghanshyam Mann (gmann) Full Results: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864 Election process details and results are also available here: https://governance.openstack.org/election/ Thank you to all of the candidates, having a good group of candidates helps engage the community in our democratic process. Thank you to all who voted and who encouraged others to vote. Voter turnout was significantly up from recent cycles. We need to ensure your voices are heard. 
-- Emmet HIKORY From mnaser at vexxhost.com Fri Sep 28 00:00:42 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 27 Sep 2018 20:00:42 -0400 Subject: [openstack-dev] [all][tc][elections] Stein TC Election Results In-Reply-To: <20180927235653.GA18250@shipstone.jp> References: <20180927235653.GA18250@shipstone.jp> Message-ID: On Thu, Sep 27, 2018 at 7:57 PM Emmet Hikory wrote: > > Please join me in congratulating the 6 newly elected members of the > Technical Committee (TC): > > - Doug Hellmann (dhellmann) > - Julia Kreger (TheJulia) > - Jeremy Stanley (fungi) Welcome back! > - Jean-Philippe Evrard (evrardjp) > - Lance Bragstad (lbragstad) > - Ghanshyam Mann (gmann) ..and welcome to the TC :) > > Full Results: > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864 > > Election process details and results are also available here: > https://governance.openstack.org/election/ > > Thank you to all of the candidates, having a good group of candidates helps > engage the community in our democratic process. A big thank you to our election team who oversees all of this as well :) > Thank you to all who voted and who encouraged others to vote. Voter turnout > was significantly up from recent cycles. We need to ensure your voices are > heard. > > -- > Emmet HIKORY > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From fungi at yuggoth.org Fri Sep 28 00:19:58 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 28 Sep 2018 00:19:58 +0000 Subject: [openstack-dev] [all][tc][elections] Stein TC Election Results In-Reply-To: References: <20180927235653.GA18250@shipstone.jp> Message-ID: <20180928001957.kaeqro62esqgihep@yuggoth.org> On 2018-09-27 20:00:42 -0400 (-0400), Mohammed Naser wrote: [...] > A big thank you to our election team who oversees all of this as > well :) [...] I wholeheartedly concur! And an even bigger thank you to the 5 candidates who were not elected this term; please run again in the next election if you're able, I think every one of you would have made a great choice for a seat on the OpenStack TC. Our community is really lucky to have so many qualified people eager to take on governance tasks. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amy at demarco.com Fri Sep 28 00:42:03 2018 From: amy at demarco.com (Amy) Date: Thu, 27 Sep 2018 19:42:03 -0500 Subject: [openstack-dev] [all][tc][elections] Stein TC Election Results In-Reply-To: <20180927235653.GA18250@shipstone.jp> References: <20180927235653.GA18250@shipstone.jp> Message-ID: <9B98BEDC-BF8B-4DF6-8717-ED61B41E5066@demarco.com> Congrats all! And for those of you who ran and were not elected, thank you for all you do in the community!
Amy (spotz) Sent from my iPhone > On Sep 27, 2018, at 6:56 PM, Emmet Hikory wrote: > > Please join me in congratulating the 6 newly elected members of the > Technical Committee (TC): > > - Doug Hellmann (dhellmann) > - Julia Kreger (TheJulia) > - Jeremy Stanley (fungi) > - Jean-Philippe Evrard (evrardjp) > - Lance Bragstad (lbragstad) > - Ghanshyam Mann (gmann) > > Full Results: > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864 > > Election process details and results are also available here: > https://governance.openstack.org/election/ > > Thank you to all of the candidates, having a good group of candidates helps > engage the community in our democratic process. > > Thank you to all who voted and who encouraged others to vote. Voter turnout > was significantly up from recent cycles. We need to ensure your voices are > heard. > > -- > Emmet HIKORY > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Fri Sep 28 01:19:46 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 27 Sep 2018 21:19:46 -0400 Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration In-Reply-To: References: <20180927022738.GA22304@rcp.sl.cloud9.ibm.com> Message-ID: <5d7129ce-39c8-fb9e-517c-64d030f71963@redhat.com> On 27/09/18 7:00 PM, Duc Truong wrote: > On Thu, Sep 27, 2018 at 11:14 AM Zane Bitter wrote: >> >>> and we will gradually fade out the existing 'AutoScalingGroup' >>> and related resource types in Heat. >> >> That's almost impossible to do without breaking existing users. 
> > One approach would be to switch the underlying Heat AutoScalingGroup > implementation to use Senlin and then deprecate the AutoScalingGroup > resource type in favor of the Senlin resource type over several > cycles. The hard part (or one hard part, at least) of that is migrating the existing data. > Not saying that this is the definitive solution, but it is > worth discussing as an option since this follows a path other projects > have taken (e.g. nova-volume extraction into cinder). +1, *definitely* worth discussing. > A prerequisite to this approach would probably require Heat to create > the so-called common library to house the autoscaling code. Then > Senlin would need to achieve feature parity against this autoscaling > library before the switch could happen. > >> >> Clearly there are _some_ parts that could in principle be shared. (I >> added some comments to the etherpad to clarify what I think Rico was >> referring to.) >> >> It seems to me that there's value in discussing it together rather than >> just working completely independently, even if the outcome of that >> discussion is that > > +1. The outcome of any discussion will be beneficial not only to the > teams but also the operators and users. > > Regards, > > Duc (dtruong) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From juliaashleykreger at gmail.com Fri Sep 28 01:38:44 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 27 Sep 2018 18:38:44 -0700 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> Message-ID: Greetings! 
I suspect the avenue of at least three different specs is likely going to be the best path forward and likely what will be required for each project to fully understand how/what/why. From my point of view, I'm quite interested in this from a Nova point of view because that is the initial user interaction point for majority of activities. I'm also wondering if this is virt driver specific, or if it can be applied to multiple virt drivers in the nova tree, since each virt driver has varying constraints. So maybe the best path forward is something nova centric to start? -Julia On Thu, Sep 27, 2018 at 10:36 AM Markus Hentsch wrote: > > Dear OpenStack developers, > > we would like to propose the introduction of an encrypted image format > in OpenStack. We already created a basic implementation involving Nova, > Cinder, OSC and Glance, which we'd like to contribute. > > We originally created a full spec document but since the official > cross-project contribution workflow in OpenStack is a thing of the past, > we have no single repository to upload it to. Thus, the Glance team > advised us to post this on the mailing list [1]. > > Ironically, Glance is the least affected project since the image > transformation processes affected are taking place elsewhere (Nova and > Cinder mostly). > > Below you'll find the most important parts of our spec that describe our > proposal - which our current implementation is based on. We'd love to > hear your feedback on the topic and would like to encourage all affected > projects to join the discussion. > > Subsequently, we'd like to receive further instructions on how we may > contribute to all of the affected projects in the most effective and > collaborative way possible. The Glance team suggested starting with a > complete spec in the glance-specs repository, followed by individual > specs/blueprints for the remaining projects [1]. Would that be alright > for the other teams? 
> > [1] > http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html > > Best regards, > Markus Hentsch > [trim] From sxmatch1986 at gmail.com Fri Sep 28 01:50:51 2018 From: sxmatch1986 at gmail.com (hao wang) Date: Fri, 28 Sep 2018 09:50:51 +0800 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> Message-ID: +1 to Julia's suggestion, Cinder should also have a spec to discuss the details of how to implement the creation of a volume from an encrypted image. Julia Kreger wrote on Fri, Sep 28, 2018 at 9:39 AM: > > Greetings! > > I suspect the avenue of at least three different specs is likely going > to be the best path forward and likely what will be required for each > project to fully understand how/what/why. From my point of view, I'm > quite interested in this from a Nova point of view because that is the > initial user interaction point for majority of activities. I'm also > wondering if this is virt driver specific, or if it can be applied to > multiple virt drivers in the nova tree, since each virt driver has > varying constraints. So maybe the best path forward is something nova > centric to start? > > -Julia > > On Thu, Sep 27, 2018 at 10:36 AM Markus Hentsch > wrote: > > > > Dear OpenStack developers, > > > > we would like to propose the introduction of an encrypted image format > > in OpenStack. We already created a basic implementation involving Nova, > > Cinder, OSC and Glance, which we'd like to contribute. > > > > We originally created a full spec document but since the official > > cross-project contribution workflow in OpenStack is a thing of the past, > > we have no single repository to upload it to. Thus, the Glance team > > advised us to post this on the mailing list [1].
> > > > Ironically, Glance is the least affected project since the image > > transformation processes affected are taking place elsewhere (Nova and > > Cinder mostly). > > > > Below you'll find the most important parts of our spec that describe our > > proposal - which our current implementation is based on. We'd love to > > hear your feedback on the topic and would like to encourage all affected > > projects to join the discussion. > > > > Subsequently, we'd like to receive further instructions on how we may > > contribute to all of the affected projects in the most effective and > > collaborative way possible. The Glance team suggested starting with a > > complete spec in the glance-specs repository, followed by individual > > specs/blueprints for the remaining projects [1]. Would that be alright > > for the other teams? > > > > [1] > > http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html > > > > Best regards, > > Markus Hentsch > > > [trim] > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ramishra at redhat.com Fri Sep 28 04:17:19 2018 From: ramishra at redhat.com (Rabi Mishra) Date: Fri, 28 Sep 2018 09:47:19 +0530 Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration In-Reply-To: References: <20180927022738.GA22304@rcp.sl.cloud9.ibm.com> Message-ID: On Thu, Sep 27, 2018 at 11:45 PM Zane Bitter wrote: > On 26/09/18 10:27 PM, Qiming Teng wrote: > > Heat still has a *lot* of users running very important stuff on Heat > scaling group code which, as you know, is burdened by a lot of technical > debt. 
Though I agree that a common library that can be used by both projects would be really good, I still don't understand what user issues (though the resource implementations are not the best, they actually work) we're trying to address here. As far as duplicated effort is concerned (that's the only justification I could get from the etherpad), possibly senlin duplicated some stuff expecting to replace the heat implementation in time. Also, we've not made any feature additions to heat group resources in a long time (expecting senlin to do it instead) and I've not seen any major bugs reported by users. Maybe we're talking about duplicated effort in the "future", now that we have changed plans for heat ASG? ;) >> What will be great is if we can build a common library across projects, and use > >> that common library in both projects, make sure we have all improvements > >> implemented in that library, and finally use Senlin from that > >> library call in Heat autoscaling group. And in long-term, we gonna let > all > > > > +1 - to expand on Rico's example, we have at least 3 completely separate > implementations of batching, each supporting different actions: > > Heat AutoscalingGroup: updates only > Heat ResourceGroup: create or update > Senlin Batch Policy: updates only > > and users are asking for batch delete as well. > I've seen this request a few times. But, what I wonder is "why a user would want to do a delete in a controlled batched manner"? The only justification provided is that "they want to throttle requests to other services, as those services are not able to handle large concurrent requests sent by heat properly". Are we not looking at the wrong place to fix those issues? IMHO, a good list of user issues on the mentioned etherpad would really help justify the effort needed. This is clearly an area > where technical debt from duplicate implementations is making it hard to > deliver value to users. > > cheers, > Zane.
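To make the duplication concrete: all three batching implementations listed above boil down to "chunk the members, apply an action per chunk, so downstream services only see a bounded number of concurrent requests". A rough sketch in Python — purely illustrative, this is not code from Heat or Senlin:

```python
def batches(members, size):
    """Yield successive batches of at most `size` members."""
    for i in range(0, len(members), size):
        yield members[i:i + size]

def rolling_action(members, size, action):
    """Apply `action` (create/update/delete/...) one batch at a time,
    so a downstream service never sees more than `size` concurrent
    requests from a single group operation."""
    for batch in batches(members, size):
        for member in batch:
            action(member)

# Example: a batched "delete" over 5 members, 2 at a time.
deleted = []
rolling_action(['m1', 'm2', 'm3', 'm4', 'm5'], 2, deleted.append)
print(deleted)  # -> ['m1', 'm2', 'm3', 'm4', 'm5']
```

A batched delete is just the same loop with a delete action, which is part of why a single shared library looks attractive compared to three parallel implementations.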
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Fri Sep 28 07:14:07 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 28 Sep 2018 09:14:07 +0200 Subject: [openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming In-Reply-To: <20180927183328.GA23767@sm-workstation> References: <20180927183328.GA23767@sm-workstation> Message-ID: <0ff539ef-023e-39bc-e907-473a194fc2fe@binero.se> Thanks Sean! I did a quick sanity check on the backup part in the puppet-cinder module and there is no opinionated default value there which needs to be changed. Best regards On 09/27/2018 08:37 PM, Sean McGinnis wrote: > This probably applies to all deployment tools, so hopefully this reaches the > right folks. > > In Havana, Cinder deprecated the use of specifying the module for configuring > backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed > the backwards compatibility handling for configs that still used the old way. > > Looking through a quick search, it appears there may be some tools that are > still defaulting to setting the backup driver name using the patch. If your > project does not specify the full driver class path, please update these to do > so now. > > Any questions, please reach out here or in the #openstack-cinder channel. > > Thanks! 
> Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From nakamura.tetsuro at lab.ntt.co.jp Fri Sep 28 07:16:06 2018 From: nakamura.tetsuro at lab.ntt.co.jp (TETSURO NAKAMURA) Date: Fri, 28 Sep 2018 16:16:06 +0900 Subject: [openstack-dev] [placement] Tetsuro Nakamura now core In-Reply-To: References: Message-ID: <45c77743-cfd4-59ca-b412-45bdd60941e9@lab.ntt.co.jp> Hi all, Thank you for putting your trust in me. It's my pleasure to work with you and to support the community. Thanks! On 2018/09/27 18:47, Chris Dent wrote: > > Since there were no objections and a week has passed, I've made > Tetsuro a member of placement-core. > > Thanks for your willingness and continued help. Use your powers > wisely. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Tetsuro Nakamura NTT Network Service Systems Laboratories TEL:0422 59 6914(National)/+81 422 59 6914(International) 3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan From hyangii at gmail.com Fri Sep 28 07:41:59 2018 From: hyangii at gmail.com (Jae Sang Lee) Date: Fri, 28 Sep 2018 16:41:59 +0900 Subject: [openstack-dev] [rally] How is the docker image of rally-openstack managed? Message-ID: Hi guys, Last week I posted a commit to rally-openstack (c8272e8591f812ced9c2f7ebdad6abca5c160dbf) to make the rally docker image work with mysql and postgres. At the docker hub, the latest tag was pushed 3 months ago and is no longer being updated.
I would like to use the official rally-openstack docker image with mysql support in openstack-helm rally. How is the xrally-openstack docker image managed? Thanks. Jaesang -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Sep 28 08:59:16 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 28 Sep 2018 10:59:16 +0200 Subject: [openstack-dev] [oslo][castellan] Time for a 1.0 release? In-Reply-To: <1538075598.6608.136.camel@redhat.com> References: <8bab2939-ae16-31f3-8191-2cb1e81bc9df@nemebean.com> <1538075598.6608.136.camel@redhat.com> Message-ID: <408527c4-9db2-92ba-9a05-c682bd3a2caf@openstack.org> Ade Lee wrote: > On Tue, 2018-09-25 at 16:30 -0500, Ben Nemec wrote: >> Doug pointed out on a recent Oslo release review that castellan is >> still >> not officially 1.0. Given the age of the project and the fact that >> we're >> asking people to deploy a Castellan-compatible keystore as one of >> the >> base services, it's probably time to address that. >> >> To that end, I'm sending this to see if anyone is aware of any >> reasons >> we shouldn't go ahead and tag a 1.0 of Castellan. 
>> > > + 1 +1 Propose it and we can continue the discussion on the review :) -- Thierry Carrez (ttx) From sbauza at redhat.com Fri Sep 28 09:11:19 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 28 Sep 2018 11:11:19 +0200 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> Message-ID: On Fri, Sep 28, 2018 at 12:50 AM melanie witt wrote: > On Thu, 27 Sep 2018 17:23:26 -0500, Matt Riedemann wrote: > > On 9/27/2018 3:02 PM, Jay Pipes wrote: > >> A great example of this would be the proposed "deploy template" from > >> [2]. This is nothing more than abusing the placement traits API in order > >> to allow passthrough of instance configuration data from the nova flavor > >> extra spec directly into the nodes.instance_info field in the Ironic > >> database. It's a hack that is abusing the entire concept of the > >> placement traits concept, IMHO. > >> > >> We should have a way *in Nova* of allowing instance configuration > >> key/value information to be passed through to the virt driver's spawn() > >> method, much the same way we provide for user_data that gets exposed > >> after boot to the guest instance via configdrive or the metadata service > >> API. What this deploy template thing is is just a hack to get around the > >> fact that nova doesn't have a basic way of passing through some collated > >> instance configuration key/value information, which is a darn shame and > >> I'm really kind of annoyed with myself for not noticing this sooner. :( > > > > We talked about this in Dublin through right? 
We said a good thing to do > > would be to have some kind of template/profile/config/whatever stored > > off in glare where schema could be registered on that thing, and then > > you pass a handle (ID reference) to that to nova when creating the > > (baremetal) server, nova pulls it down from glare and hands it off to > > the virt driver. It's just that no one is doing that work. > > If I understood correctly, that discussion was around adding a way to > pass a desired hardware configuration to nova when booting an ironic > instance. And that it's something that isn't yet possible to do using > the existing ComputeCapabilitiesFilter. Someone please correct me if I'm > wrong there. > > That said, I still don't understand why we are talking about deprecating > the ComputeCapabilitiesFilter if there's no supported way to replace it > yet. If boolean traits are not enough to replace it, then we need to > hold off on deprecating it, right? Would the > template/profile/config/whatever in glare approach replace what the > ComputeCapabilitiesFilter is doing or no? Sorry, I'm just not clearly > understanding this yet. > > I just feel some new traits have to be defined, like Jay said, and some work has to be done on the Ironic side to make sure they expose them as traits and not by the old way. That still leaves a question: does Ironic support custom capabilities? If so, that leads to Jay's point about the key/value pair information that's not intended for traits. If we all agree on the fact that traits shouldn't be allowed for key/value pairs, could we somehow imagine Ironic changing the customization mechanism to be boolean only? Also, I'm a bit confused whether operators make use of Ironic capabilities for fancy operational queries, like the ones we have in https://github.com/openstack/nova/blob/3716752/nova/scheduler/filters/extra_specs_ops.py#L24-L35 and if Ironic correctly documents how to put such things into traits? (e.g.
say CUSTOM_I_HAVE_MORE_THAN_2_GPUS) All of the above makes me a bit worried by a possible ComputeCapabilitiesFilter deprecation, if we aren't yet able to provide a clear upgrade path for our users. -Sylvain -melanie > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shu.mutow at gmail.com Fri Sep 28 09:24:04 2018 From: shu.mutow at gmail.com (Shu M.) Date: Fri, 28 Sep 2018 18:24:04 +0900 Subject: [openstack-dev] [horizon][plugins] npm jobs fail due to new XStatic-jQuery release (was: Horizon gates are broken) In-Reply-To: References: Message-ID: Hi Ivan, Thank you for your help to our plugins and sorry for bothering you. I found a problem when installing horizon in "post-install", i.e. we should install horizon with upper-constraints.txt in "post-install". I proposed patch [1] in zun-ui, please check it. If we can merge this, I will expand it to the other remaining plugins. [1] https://review.openstack.org/#/c/606010/ Thanks, Shu Muto On Fri, Sep 28, 2018 at 3:34 Ivan Kolodyazhny wrote:
I'll do it asap >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> >> >> On Mon, Sep 24, 2018 at 11:18 AM Ivan Kolodyazhny wrote: >> >>> Hi team, >>> >>> Unfortunately, horizon gates are broken now. We can't merge any patch >>> due to the -1 from CI. >>> I don't want to disable tests now, that's why I proposed a fix [1]. >>> >>> We'd got released some of XStatic-* packages last week. At least new >>> XStatic-jQuery [2] breaks horizon [3]. I'm working on a new job for >>> requirements repo [4] to prevent such issues in the future. >>> >>> Please, do not try 'recheck' until [1] will be merged. >>> >>> [1] https://review.openstack.org/#/c/604611/ >>> [2] https://pypi.org/project/XStatic-jQuery/#history >>> [3] https://bugs.launchpad.net/horizon/+bug/1794028 >>> [4] https://review.openstack.org/#/c/604613/ >>> >>> Regards, >>> Ivan Kolodyazhny, >>> http://blog.e0ne.info/ >>> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergely.csatari at nokia.com Fri Sep 28 09:54:11 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Fri, 28 Sep 2018 09:54:11 +0000 Subject: [openstack-dev] [ironic][edge] Notes from the PTG In-Reply-To: <567890A7-3685-4C95-839C-C947AFDC07FB@windriver.com> References: <3A5527BB-7E4E-48DF-9AD1-0D42C64B6106@windriver.com> <567890A7-3685-4C95-839C-C947AFDC07FB@windriver.com> Message-ID: Hi Jim, Thanks for sharing your notes. One note about the jumping automomus control plane requirement. This requirement was already identified during the Dublin PTG workshop [1]. 
This is needed for two reasons: the edge cloud instance should stay operational even if there is a network break towards other edge cloud instances, and the edge cloud instance should work together with other edge cloud instances running another version of the control plane. In Denver we decided to leave out these requirements from the MVP architecture discussions. Br, Gerg0 [1]: https://wiki.openstack.org/w/index.php?title=OpenStack_Edge_Discussions_Dublin_PTG From: Jim Rollenhagen > Reply-To: "openstack-dev at lists.openstack.org" > Date: Wednesday, September 19, 2018 at 10:49 AM To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [ironic][edge] Notes from the PTG I wrote up some notes from my perspective at the PTG for some internal teams and figured I may as well share them here. They're primarily from the ironic and edge WG rooms. Fairly raw, very long, but hopefully useful to someone. Enjoy. Tuesday: edge Edge WG (IMHO) has historically just talked about use cases, hand-waved a bit, and jumped to requiring an autonomous control plane per edge site - thus spending all of their time talking about how they will make glance and keystone sync data between control planes. penick described roughly what we do with keystone/athenz and how that can be used in a federated keystone deployment to provide autonomy for any control plane, but also a single view via a global keystone. penick and I both kept pushing for people to define a real architecture, and we ended up with 10-15 people huddled around an easel for most of the afternoon. Of note:
- Windriver (and others?) refuse to budge on the many control plane thing
- This means that they will need some orchestration tooling up top in the main DC / client machines to even come close to reasonably managing all of these sites
- They will probably need some syncing tooling
- glance->glance isn't a thing, no matter how many people say it is.
- Glance PTL recommends syncing metadata outside of glance process, and a global(ly distributed?) glance backend.
- We also defined the single pane of glass architecture that Oath plans to deploy
- Okay with losing connectivity from central control plane to single edge site
- Each edge site is a cell
- Each far edge site is just compute nodes
- Still may want to consider image distribution to edge sites so we don't have to go back to main DC?
- Keystone can be distributed the same as first architecture
- Nova folks may start investigating putting API hosts at the cell level to get the best of both worlds - if there's a network partition, can still talk to cell API to manage things
- Need to think about removing the need for rabbitmq between edge and far edge
- Kafka was suggested in the edge room for oslo.messaging in general
- Etcd watchers may be another option for an o.msg driver
- Other options are more invasive into nova - involve changing how nova-compute talks to conductor (etcd, etc) or even putting REST APIs in nova-compute (and nova-conductor?)
- Neutron is going to work on an OVS "superagent" - superagent does the RPC handling, talks some other way to child agents. Intended to scale to thousands of children. Primary use case is smart nics but seems like a win for the edge case as well.
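On the "etcd watchers" bullet above: the attraction is that a watch-capable key-value store can stand in for a message broker for simple state fan-out between edge and far edge. A toy in-memory illustration of the watch pattern (all names invented; this is not an oslo.messaging driver, and the real etcd API differs):

```python
import queue

class ToyWatchStore:
    """In-memory stand-in for a watch-capable KV store (etcd-style).
    put() delivers every update to every watcher's queue, so far-edge
    consumers see changes without a broker in between."""
    def __init__(self):
        self._watchers = []

    def watch(self):
        q = queue.Queue()
        self._watchers.append(q)
        return q

    def put(self, key, value):
        for q in self._watchers:
            q.put((key, value))

store = ToyWatchStore()
far_edge = store.watch()            # a far-edge compute node "subscribes"
store.put('/nodes/n1/power', 'on')  # control plane records a state change
event = far_edge.get(timeout=1)
print(event)  # -> ('/nodes/n1/power', 'on')
```

The appeal for the far-edge case is that the watch connection is the only long-lived link needed, instead of a full rabbitmq connection per compute node.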
Next we quickly reviewed our vision doc, and people didn’t have much to say about it. Metalsmith: it’s a thing, it’s being included into the ironic project. Dmitry is open to optionally supporting placement. Multiple instances will be a feature in the future. Otherwise mostly feature complete, goal is to keep it simple. Networking-ansible: redhat building tooling that integrates with upstream ansible modules for networking gear. Kind of an alternative to n-g-s. Not really much on plans here, RH just wanted to introduce it to the community. Some discussion about it possibly replacing n-g-s later, but no hard plans. Deploy steps/templates: we talked about what the next steps are, and what an MVP looks like. Deploy templates are triggered by the traits that nodes are scheduled against, and can add steps before or after (or in between?) the default deploy steps. We agreed that we should add a RAID deploy step, with standing questions for how arguments are passed to that deploy step, and what the defaults look like. Myself and mgoddard took an action item to open an RFE for this. We also agreed that we should start thinking about how the current (only) deploy step should be split into multiple steps. Graphical console: we discussed what the next steps are for this work. We agreed that we should document the interface and what is returned (a URL), and also start working on a redfish driver for graphical consoles. We also noted that we can test in the gate with qemu, but we only need to test that a correct URL is returned, not that the console actually works (because we don’t really care that qemu’s console works). Python 3: we talked about the changes to our jobs that are needed. We agreed to use the base name of the jobs for Python 3 (as those will be used for a long time), and add a “python2” prefix for the Python 2 jobs. We also discussed dropping certain coverage for Python 2, as our CI jobs tend to mostly test the same codepaths with some config differences. 
Last, we talked about mixed environment Python 2 and 3 testing, as this will be a thing people doing rolling upgrades of Python versions will hit. I sent an email to the ML asking if others had done or thought about this, and it sounds like we can limit that testing to oslo.messaging, and a task was reported there. Pre-upgrade checks: Not much was discussed here; TheJulia is going to look into it. One item of note is that there is an oslo project being proposed that can carry some of the common code for this. Performance improvements: We first discussed our virt driver’s performance. It was found that Nova’s power sync loop makes a call to Ironic for each instance that the compute service is managing. We do some node caching in our driver that would be useful for this. I took an action item to look into it, and have a WIP patch: https://review.openstack.org/#/c/602127/ . That patch just needs a bug filed and unit tests written. On Thursday, we talked with Nova about other performance things, and agreed we should implement a hook in Nova that Ironic can do to say “power changed” and “deploy done” and other things like this. This will help reduce or eliminate polling from our virt driver to Ironic, and also allow Nova to notice these changes faster. More on that later? Splitting the conductor: we discussed the many tasks the conductor is responsible for, and pondered if we could or should split things up. This has implications (good and bad) for operability, scalability, and security. Splitting the conductor to multiple workers would allow operators to use different security models for different tasks (e.g. only allowing an “OOB worker” access to the OOB network). It would also allow folks to scale out workers that do lots of work (like the power status loop) separately from those that do minimal work (writing PXE configs). I intend to investigate this more during this cycle and lay out a plan for doing the work. 
This also may require better distributed locking, which TheJulia has started investigating. Changing boot mode defaults: Apparently Intel is going to stop shipping hardware that is capable of legacy BIOS booting in 2020. We agreed that we should work toward changing the default boot mode to UEFI to better prepare our users, but we can’t drop legacy BIOS mode until all of the old hardware in the world is gone. TheJulia is going to dig through the code and make a task list. UEFI HTTPClient booting: This is a DHCP class that allows the DHCP server to return a URL instead of a “next-server” (TFTP location) response. This is a clear value add, and TheJulia is going to work on it as she is already neck deep in that area of code. We also need to ensure that Neutron supports this. It should, as it’s just more DHCP options, but we need to verify. SecureBoot: I presented Oath’s secureboot model, which doesn’t depend on a centralized attestation server. It made sense to people, and we discussed putting the driver in tree. The process does rely on some enhancements to iPXE, so Oath is going to investigate upstreaming those changes and publishing more documentation, and then an in-tree driver should be no problem. We also discussed Ironic’s current SecureBoot (TrustedBoot?) implementations. Currently it only works with PXE, not iPXE or Grub2. TheJulia is going to look into adding this support. We should be able to do CI jobs for it, as TPM 1.2 and 2.0 emulation both seem to be supported in QEMU as of 2.11. NIC PXE configuration as a clean step: the DRAC driver team has a desire to configure NICs for PXE or not, and sync with the ironic database’s pxe_enabled field. This has gone back and forth in IRC. We were able to resolve some of the issues with it, and rpioso is going to write a small spec to make sure we get the details right. Thursday: more ironic things Neutron cross-project discussion: we discussed SmartNICs, which the Neutron team had also discussed the previous day. 
In short, SmartNICs are NICs that run OVS. The Neutron team discussed the scalability of their OVS agent running across thousands of machines, and are planning to make some sort of “superagent”. This superagent essentially owns a group of OVS agents. It will talk to Neutron over rabbit as usual, but then use some other protocol to talk to the OVS agents it is managing. This should help with rabbit load even in “standard” Openstack environments, and is especially useful (to me) for minimizing rabbitmq connections from far edge sites. The catch with SmartNICs and Ironic is that the NICs must have power to be configured (and thus the machine must be on). This breaks our general model of “only configure networking with the machine off, to make sure we don’t cross streams between tenants and control plane”. We came to a decent compromise (I think), and agreed to continue in the ironic spec, and revisit the topic in Berlin. Federation: we discussed federation and people seemed interested, however I don’t believe we made any real progress toward getting it done. There’s still a debate whether this should be something in Ironic itself, or if there should just be some sort of proxy layer in front of multiple Ironic environments. To be continued in the spec. Agent polling: we discussed the spec to drop communication from IPA to the conductor. It seems like nobody has major issues with it, and the spec just needs some polishing before landing. L3 deployments: We brought this up, and again there seems to be little contention. I ended up approving the spec shortly after. Neutron event processing: This work has been hanging for years and not getting done. Some folks wondered if we should just poll Neutron, if that gets the work done more quickly. Others wondered if we should even care about it at all (we should). TheJulia is going to follow up with dtantsur and vdrok to see if we can get someone to mainline some caffeine and just get it done. 
CMDB: Oath and CERN presented their work toward speccing out a CMDB application that can integrate with Ironic. We discussed the problems that they are trying to solve and agreed they need solving. We also agreed that strict schema is better than blobjects (© jaypipes). We agreed it probably doesn’t need to be in Ironic governance, but could be one day. The next steps are to start hacking in a new repo in the OpenStack infrastructure, and propose specs for any Ironic integration that is needed. Red Hat and Dell contributors also showed interest in the project and volunteered to help. Some folks are going to try and talk to the wider OpenStack community to find out if there’s interest or needs from projects like Nova/Neutron/Cinder, etc. Stein goals: We put together a list of goals and voted on them. Julia has since proposed the patch to document them: https://review.openstack.org/#/c/603161/ Last thing Thursday: Cross-project discussions with Nova. Summarized here, but lots of detail in the etherpad under the Ironic section: https://etherpad.openstack.org/p/nova-ptg-stein Power sync: We discussed some problems CERN has with the instance power sync (Rackspace also saw these problems). In short, nova asserts power state if the instance “should” be off but the power is turned on out-of-band. Operators definitely need to be aware of this when doing maintenance on active machines, but we also discussed Ironic calling back to Nova when Ironic knows that the power state has been updated (via Ironic API, etc). I volunteered to look at this, and dansmith volunteered to help out. API heaviness: We discussed how many API calls our virt driver does. As mentioned earlier, I proposed a patch to make the power sync loop more lightweight. There’s also lots of polling for tasks like deploy and rescue, which we can dramatically reduce with a callback from Ironic to Nova. I also volunteered to investigate this, and dansmith again agreed to help. 
Compute host grouping: Ironic now has a mechanism for grouping conductors to nodes, and we want to mirror that in Nova. We discussed how to take the group as a config option and be able to find the other compute services managing that group, so we can build the hash ring correctly. We concluded that it’s a really hard problem (TM), and agreed to also add a config option like “peer_list” that can be used to list other compute services in the same group. This can be read dynamically each time we build the hash ring, or can be a mutable config with updates triggered by a SIGHUP. We’ll hash out the details in a blueprint or spec. Again, I agreed to begin the work, and dansmith agreed to help. Capabilities filter: This was the last topic. It’s been on the chopping block for ages, but we are just now reaching the point where it can be properly deprecated. We discussed the plan, and mostly agreed it was good enough. johnthetubaguy is going to send the plan wider and make sure it will work for folks. We also discussed modeling countable resources on Ironic resource providers, which will work as long as there is still some resource class with an inventory of one, like we have today. Some folks may investigate doing this, but it’s fuzzy how much people care or if we really need/want to do it. Friday: kind of bummed around the Ironic and TC rooms. Lots of interesting discussions, but nothing I feel like writing about here (as Ironic conversations were things like code deep-dives not worth communicating widely, and the TC topics have been written about to death). // jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sombrafam at gmail.com Fri Sep 28 11:51:46 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Fri, 28 Sep 2018 08:51:46 -0300 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> Message-ID: I don't know if our workflow supports this, but it would be nice to have a place to put cross-project changes like that (something like, openstack-cross-projects-specs), and use that as an initial point for high-level discussions. But for now, you can start creating specs for the projects involved. When you do so, please bring the topic to the project weekly meetings[1][2][3] so you can get some attention and feedback. Erlon _______________ [1] https://wiki.openstack.org/wiki/Meetings/Glance [2] https://wiki.openstack.org/wiki/Meetings/Nova [3] https://wiki.openstack.org/wiki/CinderMeetings On Thu, Sep 27, 2018 at 22:51, hao wang wrote: > +1 to Julia's suggestion, Cinder should also have a spec to discuss > the detail about how to implement the creation of volume from an > encrypted image. > Julia Kreger wrote on Fri, Sep 28, 2018 at 9:39 AM: > > > > Greetings! > > > > I suspect the avenue of at least three different specs is likely going > > to be the best path forward and likely what will be required for each > > project to fully understand how/what/why. From my point of view, I'm > > quite interested in this from a Nova point of view because that is the > > initial user interaction point for majority of activities. I'm also > > wondering if this is virt driver specific, or if it can be applied to > > multiple virt drivers in the nova tree, since each virt driver has > > varying constraints. So maybe the best path forward is something nova > > centric to start? 
> > > > -Julia > > > > On Thu, Sep 27, 2018 at 10:36 AM Markus Hentsch > > wrote: > > > > > > Dear OpenStack developers, > > > > > > we would like to propose the introduction of an encrypted image format > > > in OpenStack. We already created a basic implementation involving Nova, > > > Cinder, OSC and Glance, which we'd like to contribute. > > > > > > We originally created a full spec document but since the official > > > cross-project contribution workflow in OpenStack is a thing of the > past, > > > we have no single repository to upload it to. Thus, the Glance team > > > advised us to post this on the mailing list [1]. > > > > > > Ironically, Glance is the least affected project since the image > > > transformation processes affected are taking place elsewhere (Nova and > > > Cinder mostly). > > > > > > Below you'll find the most important parts of our spec that describe > our > > > proposal - which our current implementation is based on. We'd love to > > > hear your feedback on the topic and would like to encourage all > affected > > > projects to join the discussion. > > > > > > Subsequently, we'd like to receive further instructions on how we may > > > contribute to all of the affected projects in the most effective and > > > collaborative way possible. The Glance team suggested starting with a > > > complete spec in the glance-specs repository, followed by individual > > > specs/blueprints for the remaining projects [1]. Would that be alright > > > for the other teams? 
> > > > > > [1] > > > > http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html > > > > > > Best regards, > > > Markus Hentsch > > > > > [trim] > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Sep 28 12:00:29 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 28 Sep 2018 12:00:29 +0000 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> Message-ID: <20180928120029.d4il2jxxavb5rd6h@yuggoth.org> On 2018-09-28 08:51:46 -0300 (-0300), Erlon Cruz wrote: > I don't know if our workflow supports this, but it would be nice > to have a place to put cross-project changes like that (something > like, openstack-cross-projects-specs), and use that as an initial > point for high-level discussions. But for now, you can start > creating specs for the projects involved. [...] If memory serves, the biggest challenge around that solution was determining who approves such proposals since they still need per-project specs for the project-specific details anyway. 
Perhaps someone who has recently worked on a feature which required coordination between several teams (but not a majority of teams like our cycle goals process addresses) can comment on what worked for them and what improvements they would make on the process they followed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From markus.hentsch at cloudandheat.com Fri Sep 28 12:08:48 2018 From: markus.hentsch at cloudandheat.com (Markus Hentsch) Date: Fri, 28 Sep 2018 14:08:48 +0200 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> Message-ID: Hello Julia, we will begin formulating an individual spec for each project accordingly. Regarding your question: as you already assumed correctly, the code necessary to handle image decryption is driver specific in our current design as it is very close to the point where the ephemeral storage disk is initialized. Our proposed goal of direct decryption streaming makes it hard to design this in a generic fashion since we can't simply place the decrypted image somewhere temporarily in a generic place and then take it as a base for a driver specific next step, since that'd expose the image data. Best regards, Markus Julia Kreger wrote: > Greetings! > > I suspect the avenue of at least three different specs is likely going > to be the best path forward and likely what will be required for each > project to fully understand how/what/why. From my point of view, I'm > quite interested in this from a Nova point of view because that is the > initial user interaction point for majority of activities. I'm also > wondering if this is virt driver specific, or if it can be applied to > multiple virt drivers in the nova tree, since each virt driver has > varying constraints. 
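The constraint Markus describes, never materializing the whole decrypted image in a shared temporary location, can be illustrated with a chunk-wise decrypting generator. This is a sketch only: it uses a toy XOR transform as a stand-in for a real cipher, and it does not reflect the actual proposed implementation:

```python
# Sketch of streaming decryption: each ciphertext chunk is decrypted
# and handed straight to the consumer (e.g. written to the target
# disk), so the full plaintext image never sits in a temporary file.
# Single-byte XOR is a toy stand-in for a real cipher such as AES.
def decrypt_stream(chunks, key):
    for chunk in chunks:
        yield bytes(b ^ key for b in chunk)

# Round trip: applying the toy transform twice recovers the original,
# demonstrating that chunks flow through without being accumulated.
ciphertext = [b"\x10\x20", b"\x30"]
plaintext = b"".join(decrypt_stream(ciphertext, 0x42))
recovered = b"".join(decrypt_stream([plaintext], 0x42))
```

Because the generator yields one decrypted chunk at a time, the driver-specific step that consumes it is the only place plaintext ever exists, which is why a generic intermediate staging area is hard to offer.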
So maybe the best path forward is something nova > centric to start? > > -Julia > > On Thu, Sep 27, 2018 at 10:36 AM Markus Hentsch > wrote: >> >> Dear OpenStack developers, >> >> we would like to propose the introduction of an encrypted image format >> in OpenStack. We already created a basic implementation involving Nova, >> Cinder, OSC and Glance, which we'd like to contribute. >> >> We originally created a full spec document but since the official >> cross-project contribution workflow in OpenStack is a thing of the past, >> we have no single repository to upload it to. Thus, the Glance team >> advised us to post this on the mailing list [1]. >> >> Ironically, Glance is the least affected project since the image >> transformation processes affected are taking place elsewhere (Nova and >> Cinder mostly). >> >> Below you'll find the most important parts of our spec that describe our >> proposal - which our current implementation is based on. We'd love to >> hear your feedback on the topic and would like to encourage all affected >> projects to join the discussion. >> >> Subsequently, we'd like to receive further instructions on how we may >> contribute to all of the affected projects in the most effective and >> collaborative way possible. The Glance team suggested starting with a >> complete spec in the glance-specs repository, followed by individual >> specs/blueprints for the remaining projects [1]. Would that be alright >> for the other teams? 
>> >> [1] >> http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html >> >> Best regards, >> Markus Hentsch >> > [trim] > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Markus Hentsch Head of Cloud Innovation CLOUD&HEAT CLOUD & HEAT Technologies GmbH Königsbrücker Str. 96 (Halle 15) | 01099 Dresden Tel: +49 351 479 3670 - 100 Fax: +49 351 479 3670 - 110 E-Mail: markus.hentsch at cloudandheat.com Web: https://www.cloudandheat.com Handelsregister: Amtsgericht Dresden Registernummer: HRB 30549 USt.-Ident.-Nr.: DE281093504 Geschäftsführer: Nicolas Röhrs From josephine.seifert at secustack.com Fri Sep 28 12:14:09 2018 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Fri, 28 Sep 2018 14:14:09 +0200 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> Message-ID: Hi, On 28.09.2018 at 13:51, Erlon Cruz wrote: > I don't know if our workflow supports this, but it would be nice to > have a place to > put cross-project changes like that (something like, > openstack-cross-projects-specs), > and use that as an initial point for high-level discussions. But for > now, you can start creating > specs for the projects involved. There was a repository for cross-project-specs, but it is deprecated: https://github.com/openstack/openstack-specs So we are currently writing specs for each involved project, as suggested. You are right, it would be nice to discuss this topic with people from all involved projects together. 
> When you do so, please bring the topic to the project weekly > meetings[1][2][3] We actually started by bringing this up in the Glance meeting yesterday. And of course, we would like to discuss our specs in the project meetings. :) Best regards, Josephine (Luzi) From thierry at openstack.org Fri Sep 28 12:17:03 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 28 Sep 2018 14:17:03 +0200 Subject: [openstack-dev] [release] Release model for feature-complete OpenStack libraries Message-ID: <9aa48cc6-4261-5a67-c35e-2cca5272e9ba@openstack.org> Hi everyone, In OpenStack, libraries have to be released with a cycle-with-intermediary model, so that (1) they can be released early and often, (2) services consuming those libraries can take advantage of their new features, and (3) we detect integration bugs early rather than late. This works well while libraries see lots of changes; however, it is a bit heavy-handed for feature-complete, stable libraries: it forces those to release multiple times per year even if they have not seen any change. For those, we discussed[1] a number of mechanisms in the past, but at the last PTG we came to the conclusion that those were a bit complex and not really addressing the issue. Here is a simpler proposal. Once libraries are deemed feature-complete and stable, they should be switched to an "independent" release model (like all our third-party libraries). Those would see releases purely as needed for the occasional corner case bugfix. They won't be released early and often, there is no new feature to take advantage of, and new integration bugs should be very rare. This transition should be definitive in most cases. In rare cases where a library were to need large feature development work again, we'd have two options: develop the new feature in a new library depending on the stable one, or grant an exception and switch it back to cycle-with-intermediary. 
If one of your libraries should already be considered feature-complete and stable, please contact the release team to transition them to the new release model. Thanks for reading! [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131341.html -- The Release Team From Arkady.Kanevsky at dell.com Fri Sep 28 13:21:13 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Fri, 28 Sep 2018 13:21:13 +0000 Subject: [openstack-dev] [release] Release model for feature-complete OpenStack libraries In-Reply-To: <9aa48cc6-4261-5a67-c35e-2cca5272e9ba@openstack.org> References: <9aa48cc6-4261-5a67-c35e-2cca5272e9ba@openstack.org> Message-ID: <0efe43097b0043519ad1961637e6b647@AUSX13MPS308.AMER.DELL.COM> How will we handle which versions of libraries work together? And which combinations will be run thru CI? -----Original Message----- From: Thierry Carrez Sent: Friday, September 28, 2018 7:17 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [release] Release model for feature-complete OpenStack libraries [EXTERNAL EMAIL] Please report any suspicious attachments, links, or requests for sensitive information. Hi everyone, In OpenStack, libraries have to be released with a cycle-with-intermediary model, so that (1) they can be released early and often, (2) services consuming those libraries can take advantage of their new features, and (3) we detect integration bugs early rather than late. This works well while libraries see lots of changes; however, it is a bit heavy-handed for feature-complete, stable libraries: it forces those to release multiple times per year even if they have not seen any change. For those, we discussed[1] a number of mechanisms in the past, but at the last PTG we came to the conclusion that those were a bit complex and not really addressing the issue. Here is a simpler proposal. 
Once libraries are deemed feature-complete and stable, they should be switched to an "independent" release model (like all our third-party libraries). Those would see releases purely as needed for the occasional corner case bugfix. They won't be released early and often, there is no new feature to take advantage of, and new integration bugs should be very rare. This transition should be definitive in most cases. In rare cases where a library were to need large feature development work again, we'd have two options: develop the new feature in a new library depending on the stable one, or grant an exception and switch it back to cycle-with-intermediary. If one of your libraries should already be considered feature-complete and stable, please contact the release team to transition them to the new release model. Thanks for reading! [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131341.html -- The Release Team __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jimmy at openstack.org Fri Sep 28 13:24:34 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 28 Sep 2018 08:24:34 -0500 Subject: [openstack-dev] OpenStack Summit Forum Submission Process Extended Message-ID: <5BAE2B92.4030409@openstack.org> Hello Everyone, We are extending the Forum Submission process through September 30, 11:59pm Pacific (6:59am GMT). We've already gotten a ton of great submissions, but we want to leave the door open through the weekend in case we have any stragglers. Please submit your topics here: https://www.openstack.org/summit/berlin-2018/call-for-presentations If you'd like to review the submissions to date, you can go to https://www.openstack.org/summit/berlin-2018/vote-for-speakers. 
There is no voting period; this is just so Forum attendees can review the submissions to date. Thank you! Jimmy From openstack at fried.cc Fri Sep 28 13:25:21 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 28 Sep 2018 08:25:21 -0500 Subject: [openstack-dev] [placement] The "intended purpose" of traits Message-ID: It's time somebody said this. Every time we turn a corner or look under a rug, we find another use case for provider traits in placement. But every time we have to have the argument about whether that use case satisfies the original "intended purpose" of traits. That's the only reason I've ever been able to glean: that it (whatever "it" is) wasn't what the architects had in mind when they came up with the idea of traits. We're not even talking about anything that would require changes to the placement API. Just, "Oh, that's not a *capability* - shut it down." Bubble wrap was originally intended as a textured wallpaper and a greenhouse insulator. Can we accept the fact that traits have (many, many) uses beyond marking capabilities, and quit with the arbitrary restrictions? From doug at doughellmann.com Fri Sep 28 13:29:25 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 28 Sep 2018 09:29:25 -0400 Subject: [openstack-dev] [release] Release model for feature-complete OpenStack libraries In-Reply-To: <0efe43097b0043519ad1961637e6b647@AUSX13MPS308.AMER.DELL.COM> References: <9aa48cc6-4261-5a67-c35e-2cca5272e9ba@openstack.org> <0efe43097b0043519ad1961637e6b647@AUSX13MPS308.AMER.DELL.COM> Message-ID: Arkady.Kanevsky at dell.com writes: > How will we handle which versions of libraries work together? > And which combinations will be run thru CI? Dependency management will work the same way it does today. Each component (server or library) lists the versions of the dependencies it is compatible with. That information goes into the packages built for the component, and is used to ensure that a compatible version of each dependency is installed when the package is installed. 
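The relationship Doug describes between a project's declared range and the single version CI installs can be sketched with a toy version check. The helper and version numbers below are made up for illustration; real resolution is done by pip against `upper-constraints.txt`, and this simplification ignores pre-releases, epochs, and upper bounds:

```python
# Toy illustration of requirement-vs-constraint checking: a project
# declares a minimum version of a dependency, and CI installs whatever
# single version the central constraints list pins, provided that pin
# satisfies the project's range.
def parse(version):
    """Turn '6.4.0' into the comparable tuple (6, 4, 0)."""
    return tuple(int(part) for part in version.split("."))

def pin_satisfies_minimum(pin, minimum):
    """True if the constraints pin meets the declared minimum."""
    return parse(pin) >= parse(minimum)

# e.g. a project requires foo>=6.1.0 and the constraints list pins 6.4.0
ok = pin_satisfies_minimum("6.4.0", "6.1.0")
```

Because every project is tested against the same pinned set, an incompatible combination shows up in CI before it can reach a release.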
We control what is actually tested by using the upper constraints list managed in the requirements repository. There's more detail about how that list is managed in the project team guide at https://docs.openstack.org/project-team-guide/dependency-management.html Doug From cdent+os at anticdent.org Fri Sep 28 13:39:24 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 28 Sep 2018 14:39:24 +0100 (BST) Subject: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv Message-ID: I'm still trying to figure out how to properly create a "modern" (as in zuul v3 oriented) integration test for placement using gabbi and tempest. That work is happening at https://review.openstack.org/#/c/601614/ There was lots of progress made after the last message on this topic http://lists.openstack.org/pipermail/openstack-dev/2018-September/134837.html but I've reached another interesting impasse. From devstack's standpoint, the way to say "I want to use a tempest plugin" is to set TEMPEST_PLUGINS to a list of where the plugins are. devstack:lib/tempest then does a: tox -evenv-tempest -- pip install -c $REQUIREMENTS_DIR/upper-constraints.txt $TEMPEST_PLUGINS http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_12_58_138163 I have this part working as expected. However, the advice is then to create a new job that has a parent of devstack-tempest. That zuul job runs a variety of tox environments, depending on the setting of the `tox_envlist` var. If you wish to use a `tempest_test_regex` (I do) the preferred tox environment is 'all'. That venv doesn't have the plugin installed, thus no gabbi tests are found: http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683 How do I get my plugin installed into the right venv while still following the guidelines for good zuul behavior? 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From lbragstad at gmail.com Fri Sep 28 13:48:46 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 28 Sep 2018 08:48:46 -0500 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: Bumping this thread again and proposing two conventions based on the discussion here. I propose we decide on one of the two following conventions: *<service-type>:<resource>:<action>* or *<service-type>:<resource>_<action>* Where <service-type> is the corresponding service type of the project [0], and <action> is either create, get, list, update, or delete. I think decoupling the method from the policy name should aid in consistency, regardless of the underlying implementation. The HTTP method specifics can still be relayed using oslo.policy's DocumentedRuleDefault object [1]. I think the plurality of the resource should default to what makes sense for the operation being carried out (e.g., list:foobars, create:foobar). I don't mind the first one because it's clear about what the delimiter is and it doesn't look weird when projects have something like: <service-type>:<resource>:<sub-resource>:<action> If folks are ok with this, I can start working on some documentation that explains the motivation for this. Afterward, we can figure out how we want to track this work. What color do you want the shed to be? [0] https://service-types.openstack.org/service-types.json [1] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad wrote: > > On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann > wrote: >> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < >> john at johngarbutt.com> wrote ---- >> > tl;dr: +1 consistent names >> > I would make the names mirror the API... 
because the Operator setting >> them knows the API, not the codeIgnore the crazy names in Nova, I certainly >> hate them >> >> Big +1 on consistent naming which will help operator as well as >> developer to maintain those. >> >> > >> > Lance Bragstad wrote: >> > > I'm curious if anyone has context on the "os-" part of the format? >> > >> > My memory of the Nova policy mess...* Nova's policy rules >> traditionally followed the patterns of the code >> > ** Yes, horrible, but it happened.* The code used to have the >> OpenStack API and the EC2 API, hence the "os"* API used to expand with >> extensions, so the policy name is often based on extensions** note most of >> the extension code has now gone, including lots of related policies* Policy >> in code was focused on getting us to a place where we could rename policy** >> Whoop whoop by the way, it feels like we are really close to something >> sensible now! >> > Lance Bragstad wrote: >> > Thoughts on using create, list, update, and delete as opposed to post, >> get, put, patch, and delete in the naming convention? >> > I could go either way as I think about "list servers" in the API.But >> my preference is for the URL stub and POST, GET, etc. >> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad >> wrote:If we consider dropping "os", should we entertain dropping "api", >> too? Do we have a good reason to keep "api"?I wouldn't be opposed to simple >> service types (e.g "compute" or "loadbalancer"). >> > +1The API is known as "compute" in api-ref, so the policy should be >> for "compute", etc. >> >> Agree on mapping the policy name with api-ref as much as possible. Other >> than policy name having 'os-', we have 'os-' in resource name also in nova >> API url like /os-agents, /os-aggregates etc (almost every resource except >> servers , flavors). As we cannot get rid of those from API url, we need to >> keep the same in policy naming too? 
or we can have policy name like >> compute:agents:create/post but that mismatch from api-ref where agents >> resource url is os-agents. >> > > Good question. I think this depends on how the service does policy > enforcement. > > I know we did something like this in keystone, which required policy names > and method names to be the same: > > "identity:list_users": "..." > > Because the initial implementation of policy enforcement used a decorator > like this: > > from keystone import controller > > @controller.protected > def list_users(self): > ... > > Having the policy name the same as the method name made it easier for the > decorator implementation to resolve the policy needed to protect the API > because it just looked at the name of the wrapped method. The advantage was > that it was easy to implement new APIs because you only needed to add a > policy, implement the method, and make sure you decorate the implementation. > > While this worked, we are moving away from it entirely. The decorator > implementation was ridiculously complicated. Only a handful of keystone > developers understood it. With the addition of system-scope, it would have > only become more convoluted. It also enables a much more copy-paste pattern > (e.g., so long as I wrap my method with this decorator implementation, > things should work right?). Instead, we're calling enforcement within the > controller implementation to ensure things are easier to understand. It > requires developers to be cognizant of how different token types affect the > resources within an API. That said, coupling the policy name to the method > name is no longer a requirement for keystone. > > Hopefully, that helps explain why we needed them to match. > > >> >> Also we have action API (i know from nova not sure from other services) >> like POST /servers/{server_id}/action {addSecurityGroup} and their current >> policy name is all inconsistent. 
few have policy name including their >> resource name like "os_compute_api:os-flavor-access:add_tenant_access", few >> has 'action' in policy name like >> "os_compute_api:os-admin-actions:reset_state" and few has direct action >> name like "os_compute_api:os-console-output" > > Since the actions API relies on the request body and uses a single HTTP > method, does it make sense to have the HTTP method in the policy name? It > feels redundant, and we might be able to establish a convention that's more > meaningful for things like action APIs. It looks like cinder has a similar > pattern [0]. > > [0] > https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action > > >> >> May be we can make them consistent with >> <service-type>:<resource>:<action> or any better opinion. >> >> > From: Lance Bragstad > The topic of having >> consistent policy names has popped up a few times this week. > > >> > I would love to have this nailed down before we go through all the >> policy rules again. In my head I hope in Nova we can go through each policy >> rule and do the following: >> > * move to new consistent policy name, deprecate existing name * >> hardcode scope check to project, system or user ** (user, yes... keypairs, >> yuck, but its how they work) ** deprecate in rule scope checks, which are >> largely bogus in Nova anyway * make read/write/admin distinction ** therefore >> adding the "noop" role, amount other things >> >> + policy granularity. >> >> It is good idea to make the policy improvement all together and for all >> rules as you mentioned. But my worries is how much load it will be on >> operator side to migrate all policy rules at same time? What will be the >> deprecation period etc which i think we can discuss on proposed spec - >> https://review.openstack.org/#/c/547850 > > > Yeah, that's another valid concern. I know at least one operator has > weighed in already. I'm curious if operators have specific input here. 
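The keystone decorator pattern Lance describes earlier in the thread, deriving the policy name to enforce from the wrapped method's name, can be sketched as follows. This is a hypothetical simplification, not keystone's actual (far more complicated) implementation:

```python
import functools

# Hypothetical sketch: the policy name is derived from the decorated
# method's name, e.g. list_users -> "identity:list_users", which is
# why the policy name and method name had to match.
def protected(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        self.enforce("identity:%s" % method.__name__)
        return method(self, *args, **kwargs)
    return wrapper

class UserController:
    def __init__(self):
        self.enforced = []

    def enforce(self, policy_name):
        # Stand-in for a real policy check against the request context.
        self.enforced.append(policy_name)

    @protected
    def list_users(self):
        return []

controller = UserController()
controller.list_users()
```

The convenience is visible here: adding an API is just "add a policy, write the method, decorate it", which is also exactly the copy-paste risk the thread warns about.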
> > It ultimately depends on if they override existing policies or not. If a > deployment doesn't have any overrides, it should be a relatively simple > change for operators to consume. > > >> >> >> -gmann >> >> > Thanks, John >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Sep 28 13:49:32 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 28 Sep 2018 08:49:32 -0500 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: Adding the operator list back in. On Fri, Sep 28, 2018 at 8:48 AM Lance Bragstad wrote: > Bumping this thread again and proposing two conventions based on the > discussion here. I propose we decide on one of the two following > conventions: > > *<service-type>:<resource>:<action>* > > or > > *<service-type>:<resource>_<action>* > > Where <service-type> is the corresponding service type of the project [0], > and <action> is either create, get, list, update, or delete. I think > decoupling the method from the policy name should aid in consistency, > regardless of the underlying implementation. The HTTP method specifics can > still be relayed using oslo.policy's DocumentedRuleDefault object [1]. 
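As a concrete illustration of the first proposed convention, here is a tiny helper that builds names of that shape. The helper is hypothetical, not part of oslo.policy or any project:

```python
# Hypothetical helper showing the shape of the proposed
# <service-type>:<resource>:<action> convention, with an optional
# sub-resource segment for nested resources.
def policy_name(service_type, resource, action, sub_resource=None):
    parts = [service_type, resource]
    if sub_resource is not None:
        parts.append(sub_resource)
    parts.append(action)
    return ":".join(parts)

# e.g. "compute:servers:list" and "compute:servers:interfaces:create"
simple = policy_name("compute", "servers", "list")
nested = policy_name("compute", "servers", "create",
                     sub_resource="interfaces")
```

Using a single delimiter throughout is what keeps the nested form unambiguous, which is the argument made for the first convention over the underscore variant.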
> I think the plurality of the resource should default to what makes sense > for the operation being carried out (e.g., list:foobars, create:foobar). > > I don't mind the first one because it's clear about what the delimiter is > and it doesn't look weird when projects have something like: > > <service-type>:<resource>:<sub-resource>:<action> > > If folks are ok with this, I can start working on some documentation that > explains the motivation for this. Afterward, we can figure out how we want > to track this work. > > What color do you want the shed to be? > > [0] https://service-types.openstack.org/service-types.json > [1] > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > > On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad > wrote: > >> >> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann >> wrote: >> >>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < >>> john at johngarbutt.com> wrote ---- >>> > tl;dr: +1 consistent names >>> > I would make the names mirror the API... because the Operator setting >>> them knows the API, not the code. Ignore the crazy names in Nova, I certainly >>> hate them >>> >>> Big +1 on consistent naming which will help operator as well as >>> developer to maintain those. >>> >>> > >>> > Lance Bragstad wrote: >>> > > I'm curious if anyone has context on the "os-" part of the format? >>> > >>> > My memory of the Nova policy mess... * Nova's policy rules >>> traditionally followed the patterns of the code >>> > ** Yes, horrible, but it happened. * The code used to have the >>> OpenStack API and the EC2 API, hence the "os" * API used to expand with >>> extensions, so the policy name is often based on extensions ** note most of >>> the extension code has now gone, including lots of related policies * Policy >>> in code was focused on getting us to a place where we could rename policy ** >>> Whoop whoop by the way, it feels like we are really close to something >>> sensible now! 
>>> > Lance Bragstad wrote: >>> > Thoughts on using create, list, update, and delete as opposed to >>> post, get, put, patch, and delete in the naming convention? >>> > I could go either way as I think about "list servers" in the API.But >>> my preference is for the URL stub and POST, GET, etc. >>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad >>> wrote:If we consider dropping "os", should we entertain dropping "api", >>> too? Do we have a good reason to keep "api"?I wouldn't be opposed to simple >>> service types (e.g "compute" or "loadbalancer"). >>> > +1The API is known as "compute" in api-ref, so the policy should be >>> for "compute", etc. >>> >>> Agree on mapping the policy name with api-ref as much as possible. Other >>> than policy name having 'os-', we have 'os-' in resource name also in nova >>> API url like /os-agents, /os-aggregates etc (almost every resource except >>> servers , flavors). As we cannot get rid of those from API url, we need to >>> keep the same in policy naming too? or we can have policy name like >>> compute:agents:create/post but that mismatch from api-ref where agents >>> resource url is os-agents. >>> >> >> Good question. I think this depends on how the service does policy >> enforcement. >> >> I know we did something like this in keystone, which required policy >> names and method names to be the same: >> >> "identity:list_users": "..." >> >> Because the initial implementation of policy enforcement used a decorator >> like this: >> >> from keystone import controller >> >> @controller.protected >> def list_users(self): >> ... >> >> Having the policy name the same as the method name made it easier for the >> decorator implementation to resolve the policy needed to protect the API >> because it just looked at the name of the wrapped method. The advantage was >> that it was easy to implement new APIs because you only needed to add a >> policy, implement the method, and make sure you decorate the implementation. 
>> While this worked, we are moving away from it entirely. The decorator
>> implementation was ridiculously complicated. Only a handful of keystone
>> developers understood it. With the addition of system-scope, it would have
>> only become more convoluted. It also enables a much more copy-paste pattern
>> (e.g., so long as I wrap my method with this decorator implementation,
>> things should work right?). Instead, we're calling enforcement within the
>> controller implementation to ensure things are easier to understand. It
>> requires developers to be cognizant of how different token types affect the
>> resources within an API. That said, coupling the policy name to the method
>> name is no longer a requirement for keystone.
>>
>> Hopefully, that helps explain why we needed them to match.
>>
>>> Also we have action APIs (I know about nova, not sure about other
>>> services) like POST /servers/{server_id}/action {addSecurityGroup}, and
>>> their current policy names are all inconsistent. A few have a policy name
>>> including their resource name like
>>> "os_compute_api:os-flavor-access:add_tenant_access", a few have 'action'
>>> in the policy name like "os_compute_api:os-admin-actions:reset_state",
>>> and a few have a direct action name like
>>> "os_compute_api:os-console-output".
>>
>> Since the actions API relies on the request body and uses a single HTTP
>> method, does it make sense to have the HTTP method in the policy name? It
>> feels redundant, and we might be able to establish a convention that's more
>> meaningful for things like action APIs. It looks like cinder has a similar
>> pattern [0].
>>
>> [0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action
>>
>>> Maybe we can make them consistent with
>>> <service_type>:<resource>:<action> or any better opinion.
>>>
>>> > From: Lance Bragstad > The topic of having
>>> consistent policy names has popped up a few times this week.
>>> >
>>> > I would love to have this nailed down before we go through all the
>>> > policy rules again. In my head I hope in Nova we can go through each
>>> > policy rule and do the following:
>>> > * move to new consistent policy name, deprecate existing name
>>> > * hardcode scope check to project, system or user
>>> >   ** (user, yes... keypairs, yuck, but it's how they work)
>>> >   ** deprecate in-rule scope checks, which are largely bogus in Nova
>>> >   anyway
>>> > * make read/write/admin distinction
>>> >   ** therefore adding the "noop" role, among other things
>>>
>>> + policy granularity.
>>>
>>> It is a good idea to make the policy improvement all together and for all
>>> rules as you mentioned. But my worry is how much load it will be on the
>>> operator side to migrate all policy rules at the same time? What will be
>>> the deprecation period etc., which I think we can discuss on the proposed
>>> spec - https://review.openstack.org/#/c/547850
>>
>> Yeah, that's another valid concern. I know at least one operator has
>> weighed in already. I'm curious if operators have specific input here.
>>
>> It ultimately depends on if they override existing policies or not. If a
>> deployment doesn't have any overrides, it should be a relatively simple
>> change for operators to consume.
>>
>>> -gmann
>>>
>>> > Thanks, John
>>> __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Fri Sep 28 13:57:50 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 28 Sep 2018 06:57:50 -0700 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: <20180928120029.d4il2jxxavb5rd6h@yuggoth.org> References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> <20180928120029.d4il2jxxavb5rd6h@yuggoth.org> Message-ID: On Fri, Sep 28, 2018 at 5:00 AM Jeremy Stanley wrote: > > If memory serves, the biggest challenge around that solution was > determining who approves such proposals since they still need > per-project specs for the project-specific details anyway. Perhaps > someone who has recently worked on a feature which required > coordination between several teams (but not a majority of teams like > our cycle goals process addresses) can comment on what worked for > them and what improvements they would make on the process they > followed. > -- This is definitely the biggest challenge, and I think it is one of those things that is going to be on a case-by-case basis. In the case of neutron smartnic support with ironic, the spec is largely living in ironic-specs, but we are collaborating with neutron folks. 
They may have other specs that tie in, but that we don't necessarily need to be aware of. I also think the prior ironic/neutron integration work executed that way. My perception with nova has also largely been similar with ironic's specs driving some changes in the nova ironic virt driver because we were evolving ironic, as long as there is a blueprint or something tracking that piece of work so they have visibility. At some point, some spec has to get a green light or be pushed forward first. Beyond that, it is largely a tracking issue as long as there is consensus. From d.krol at samsung.com Fri Sep 28 14:05:15 2018 From: d.krol at samsung.com (Dariusz Krol) Date: Fri, 28 Sep 2018 16:05:15 +0200 Subject: [openstack-dev] [python3-first] support in stable branches In-Reply-To: References: <20180927144829eucas1p26786a6e62c869b8138066f8857dae267~YSSg9Kjph2467924679eucas1p2l@eucas1p2.samsung.com> Message-ID: <20180928140517eucas1p285022a4ce8a367efadd9cdf05916a917~YlWEluavl1478514785eucas1p29@eucas1p2.samsung.com> Hello, I'm specifically referring to branches mentioned in: https://github.com/openstack/goal-tools/blob/4125c31e74776a7dc6a15d2276ab51ff3e73cd16/goal_tools/python3_first/jobs.py#L54 I hope this helps. Best, Dariusz Krol On 09/27/2018 06:04 PM, Ben Nemec wrote: > > > On 9/27/18 10:36 AM, Doug Hellmann wrote: >> Dariusz Krol writes: >> >>> Hello Champions :) >>> >>> >>> I work on the Trove project and we are wondering if python3 should be >>> supported in previous releases as well? >>> >>> Actually this question was asked by Alan Pevec from the stable branch >>> maintainers list. >>> >>> I saw you added releases up to ocata to support python3 and there are >>> already changes on gerrit waiting to be merged but after reading [1] I >>> have my doubts about this. >> >> I'm not sure what you're referring to when you say "added releases up to >> ocata" here. Can you link to the patches that you have questions about? 
> > Possibly the zuul migration patches for all the stable branches? If > so, those don't change the status of python 3 support on the stable > branches, they just split the zuul configuration to make it easier to > add new python 3 jobs on master without affecting the stable branches. > >> >>> Could you elaborate why it is necessary to support previous releases ? >>> >>> >>> Best, >>> >>> Dariusz Krol >>> >>> >>> [1] https://docs.openstack.org/project-team-guide/stable-branches.html >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> From mtreinish at kortar.org Fri Sep 28 14:10:06 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Fri, 28 Sep 2018 10:10:06 -0400 Subject: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv In-Reply-To: References: Message-ID: <20180928141006.GA16108@zeong> On Fri, Sep 28, 2018 at 02:39:24PM +0100, Chris Dent wrote: > > I'm still trying to figure out how to properly create a "modern" (as > in zuul v3 oriented) integration test for placement using gabbi and > tempest. That work is happening at https://review.openstack.org/#/c/601614/ > > There was lots of progress made after the last message on this > topic http://lists.openstack.org/pipermail/openstack-dev/2018-September/134837.html > but I've reached another interesting impasse. 
> > From devstack's standpoint, the way to say "I want to use a tempest
> plugin" is to set TEMPEST_PLUGINS to a list of where the plugins are.
> devstack:lib/tempest then does a:
>
> tox -evenv-tempest -- pip install -c $REQUIREMENTS_DIR/upper-constraints.txt $TEMPEST_PLUGINS
>
> http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_12_58_138163
>
> I have this part working as expected.
>
> However,
>
> The advice is then to create a new job that has a parent of
> devstack-tempest. That zuul job runs a variety of tox environments,
> depending on the setting of the `tox_envlist` var. If you wish to
> use a `tempest_test_regex` (I do) the preferred tox environment is
> 'all'.
>
> That venv doesn't have the plugin installed, thus no gabbi tests are
> found:
>
> http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683 Right above this line it shows that the gabbi-tempest plugin is installed in the venv: http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661 at version 0.1.1. It's a bit weird because it's line wrapped in my browser. The devstack logs also show the plugin: http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/controller/logs/devstacklog.txt.gz#_2018-09-28_11_13_13_076 All the tempest tox jobs that run tempest (and the tempest-venv command used by devstack) run inside the same tox venv: https://github.com/openstack/tempest/blob/master/tox.ini#L52 My guess is that the plugin isn't returning any tests that match the regex. I'm also a bit alarmed that tempest run is returning 0 there when no tests are being run. That's definitely a bug because things should fail with no tests being successfully run. -Matt Treinish > > How do I get my plugin installed into the right venv while still > following the guidelines for good zuul behavior? 
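[Editor's note: for reference, the devstack knob discussed above is usually set in a job's local.conf. A sketch under the assumption of a single plugin checkout — the path and plugin name are illustrative, not prescribed:]

```ini
[[local|localrc]]
# lib/tempest pip-installs everything listed here into the tox-managed
# tempest venv, constrained by upper-constraints.txt.
TEMPEST_PLUGINS='/opt/stack/gabbi-tempest'
```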
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From juliaashleykreger at gmail.com Fri Sep 28 14:13:23 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 28 Sep 2018 07:13:23 -0700 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: Message-ID: 

Eric,

Very well said, I completely agree with you. We should not hold
ourselves back based upon perceptions of original intended purpose.
Things do change. We have to accept that. We must normalize this fact
in our actions moving forward.

That being said, I'm not entirely sure I'm personally fully aware of
the arbitrary restrictions you speak of. Is there a thread or a
discussion out there that I can gain further context with?

Thanks!

-Julia

On Fri, Sep 28, 2018 at 6:25 AM Eric Fried wrote:
>
> It's time somebody said this.
>
> Every time we turn a corner or look under a rug, we find another use
> case for provider traits in placement. But every time we have to have
> the argument about whether that use case satisfies the original
> "intended purpose" of traits.
>
> That's the only reason I've ever been able to glean: that it (whatever "it"
> is) wasn't what the architects had in mind when they came up with the
> idea of traits. We're not even talking about anything that would require
> changes to the placement API. Just, "Oh, that's not a *capability* -
> shut it down."
>
> Bubble wrap was originally intended as a textured wallpaper and a
> greenhouse insulator. Can we accept the fact that traits have (many,
> many) uses beyond marking capabilities, and quit with the arbitrary
> restrictions?
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From colleen at gazlene.net Fri Sep 28 14:29:32 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 28 Sep 2018 16:29:32 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 24 September 2018 Message-ID: <1538144972.3848364.1523939360.5179EC0E@webmail.messagingengine.com> # Keystone Team Update - Week of 24 September 2018 ## News A theme this week was enhancing keystone's federation implementation to better support Edge use cases. We talked about it some on IRC[1] and the mailing list[2]. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-09-25.log.html#t2018-09-25T16:37:42 [2] http://lists.openstack.org/pipermail/openstack-dev/2018-September/135072.html ## Open Specs Search query: https://bit.ly/2Pi6dGj In addition to the Stein specs mentioned last week, Adam has been working on an untargeted spec for federation enhancements[3]. [3] https://review.openstack.org/313604 ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 15 changes this week, including lots of bugfixes and improvements to our Zuul config. ## Changes that need Attention Search query: https://bit.ly/2PUk84S There are 54 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs This week we opened 7 new bugs and closed 7. 
Bugs opened (7) 
Bug #1794376 (keystone:High) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1794376 
Bug #1794552 (keystone:High) opened by Adam Young https://bugs.launchpad.net/keystone/+bug/1794552 
Bug #1794864 (keystone:Medium) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1794864 
Bug #1794527 (keystone:Wishlist) opened by Adam Young https://bugs.launchpad.net/keystone/+bug/1794527 
Bug #1794112 (keystone:Undecided) opened by fuckubuntu1 https://bugs.launchpad.net/keystone/+bug/1794112 
Bug #1794726 (keystone:Undecided) opened by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1794726 
Bug #1794179 (keystonemiddleware:Undecided) opened by Tim Burke https://bugs.launchpad.net/keystonemiddleware/+bug/1794179 

Bugs closed (3) 
Bug #1794112 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1794112 
Bug #973681 (keystonemiddleware:Medium) https://bugs.launchpad.net/keystonemiddleware/+bug/973681 
Bug #1473042 (keystonemiddleware:Wishlist) https://bugs.launchpad.net/keystonemiddleware/+bug/1473042 

Bugs fixed (4) 
Bug #1750843 (keystone:Low) fixed by Matthew Thode https://bugs.launchpad.net/keystone/+bug/1750843 
Bug #1768980 (keystone:Low) fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1768980 
Bug #1473292 (keystone:Wishlist) fixed by Vishakha Agarwal https://bugs.launchpad.net/keystone/+bug/1473292 
Bug #1275962 (keystonemiddleware:Wishlist) fixed by no one https://bugs.launchpad.net/keystonemiddleware/+bug/1275962 

## Milestone Outlook 

https://releases.openstack.org/stein/schedule.html 

The spec proposal freeze deadline is in a month; if you would like to see a feature in keystone in Stein, please propose it now so it can get feedback before the spec freeze deadline. 
## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From cdent+os at anticdent.org Fri Sep 28 14:31:10 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 28 Sep 2018 15:31:10 +0100 (BST) Subject: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv In-Reply-To: <20180928141006.GA16108@zeong> References: <20180928141006.GA16108@zeong> Message-ID: On Fri, 28 Sep 2018, Matthew Treinish wrote: 

>> http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683
>
> Right above this line it shows that the gabbi-tempest plugin is installed in
> the venv:
>
> http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661

Ah, so it is, thanks. My grepping and visual-grepping failed
because of the weird linebreaks. Le sigh.

For curiosity: What's the processing that is making it be installed
twice? I ask because I'm hoping to (eventually) trim this to as
small and light as possible. And then even more eventually I hope to
make it so that if a project chooses the right job and has a gabbits
directory, they'll get run.

The part that was confusing for me was that the virtual env that
lib/tempest (from devstack) uses is not even mentioned in tempest's
tox.ini, so it is using its own directory as far as I could tell.

> My guess is that the plugin isn't returning any tests that match the regex.

I'm going to run it without a regex and see what it produces.

It might be that the pre job I'm using to try to get the gabbits in the
right place is not working as desired.

A few patchsets ago when I was using the oogly way of doing things
it was all working. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sean.mcginnis at gmx.com Fri Sep 28 14:33:15 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 28 Sep 2018 09:33:15 -0500 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: <20180928143314.GA18667@sm-workstation> 

> On Fri, Sep 28, 2018 at 8:48 AM Lance Bragstad wrote:
>
> > Bumping this thread again and proposing two conventions based on the
> > discussion here. I propose we decide on one of the two following
> > conventions:
> >
> > *<service_type>:<resource>:<action>*
> >
> > or
> >
> > *<service_type>:<resource>_<action>*
> >
> > Where <service_type> is the corresponding service type of the project [0],
> > and <action> is either create, get, list, update, or delete. I think
> > decoupling the method from the policy name should aid in consistency,
> > regardless of the underlying implementation. The HTTP method specifics can
> > still be relayed using oslo.policy's DocumentedRuleDefault object [1].
> >
> > I think the plurality of the resource should default to what makes sense
> > for the operation being carried out (e.g., list:foobars, create:foobar).
> >
> > I don't mind the first one because it's clear about what the delimiter is
> > and it doesn't look weird when projects have something like:
> >
> > <service_type>:<resource>:<sub-resource>:<action>

My initial preference was the second format, but you make a good point here about potential subactions. Either is fine with me - the main thing I would love to see is consistency in format. But based on this part, I vote for option 2.

> > If folks are ok with this, I can start working on some documentation that
> > explains the motivation for this. Afterward, we can figure out how we want
> > to track this work.

+1 thanks for working on this! 
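[Editor's note: to make the two candidate formats concrete, here is a small hypothetical helper — invented for illustration, not part of oslo.policy or any project — that composes names under either convention:]

```python
VERBS = ("create", "get", "list", "update", "delete")

def policy_name(service_type, resource, action, style="colon"):
    """Compose a policy name under one of the two proposed conventions.

    style="colon"      -> <service_type>:<resource>:<action>
    style="underscore" -> <service_type>:<resource>_<action>
    """
    if action not in VERBS:
        raise ValueError("unexpected action: %s" % action)
    if style == "colon":
        return "%s:%s:%s" % (service_type, resource, action)
    return "%s:%s_%s" % (service_type, resource, action)
```

Either way, the HTTP method stays out of the name, matching the point above that method specifics belong in the DocumentedRuleDefault metadata rather than the policy name itself.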
From balazs.gibizer at ericsson.com Fri Sep 28 14:41:58 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Fri, 28 Sep 2018 16:41:58 +0200 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: Message-ID: <1538145718.22269.0@smtp.office365.com> 

On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried wrote:
> It's time somebody said this.
>
> Every time we turn a corner or look under a rug, we find another use
> case for provider traits in placement. But every time we have to have
> the argument about whether that use case satisfies the original
> "intended purpose" of traits.
>
> That's the only reason I've ever been able to glean: that it (whatever
> "it"
> is) wasn't what the architects had in mind when they came up with the
> idea of traits. We're not even talking about anything that would
> require
> changes to the placement API. Just, "Oh, that's not a *capability* -
> shut it down."
>
> Bubble wrap was originally intended as a textured wallpaper and a
> greenhouse insulator. Can we accept the fact that traits have (many,
> many) uses beyond marking capabilities, and quit with the arbitrary
> restrictions?

How far are we willing to go? Is an arbitrary (key: value) pair encoded
in a trait name like key_`str(value)` (e.g. CURRENT_TEMPERATURE: 85
encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in
placement? 
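[Editor's note: the hypothetical key/value encoding in the question above could be sketched like this — purely illustrative, not an endorsement, and not part of any placement API:]

```python
TRAIT_PREFIX = "CUSTOM_"

def encode_trait(key, value):
    """Encode a (key, value) pair as a placement-style custom trait name."""
    name = "%s_%s" % (key, value)
    # Custom trait names are prefixed and limited to [A-Z0-9_].
    return TRAIT_PREFIX + name.upper().replace("-", "_").replace(" ", "_")

def decode_trait(trait):
    """Best-effort inverse: split the trailing token back off as the value."""
    body = trait[len(TRAIT_PREFIX):]
    key, _, value = body.rpartition("_")
    return key, value
```

Note the round trip is ambiguous as soon as the key itself contains an underscore, which illustrates part of why encoding values into trait names is contentious.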
Cheers, gibi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mtreinish at kortar.org Fri Sep 28 15:02:07 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Fri, 28 Sep 2018 11:02:07 -0400 Subject: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv In-Reply-To: References: <20180928141006.GA16108@zeong> Message-ID: <20180928150207.GB16108@zeong> On Fri, Sep 28, 2018 at 03:31:10PM +0100, Chris Dent wrote: > On Fri, 28 Sep 2018, Matthew Treinish wrote: > > > > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683 > > > > Right above this line it shows that the gabbi-tempest plugin is installed in > > the venv: > > > > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661 > > Ah, so it is, thanks. My grepping and visual-grepping failed > because of the weird linebreaks. Le sigh. > > For curiosity: What's the processing that is making it be installed > twice? I ask because I'm hoping to (eventually) trim this to as > small and light as possible. And then even more eventually I hope to > make it so that if a project chooses the right job and has a gabbits > directory, they'll get run. The plugin should only be installed once. From the logs here is the only place the plugin is being installed in the venv: http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_01_027151 The rest of the references are just tox printing out the packages installed in the venv before running a command. 
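[Editor's note: the tox-managed environment devstack uses here is essentially a generic command runner. A simplified sketch of such a tox stanza — illustrative, not the exact upstream definition:]

```ini
[testenv:venv-tempest]
# devstack invokes e.g. `tox -evenv-tempest -- pip install ...` and
# `tox -evenv-tempest -- tempest run ...`; {posargs} is whatever
# follows the "--" on the command line.
deps = -r{toxinidir}/requirements.txt
commands = {posargs}
```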
> > The part that was confusing for me was that the virtual env that
> > lib/tempest (from devstack) uses is not even mentioned in tempest's
> > tox.ini, so it is using its own directory as far as I could tell.

It should be; devstack should be using the venv-tempest tox job to do venv prep (like installing the plugins) and run commands (like running tempest list-plugins for the log). This tox env is defined here: https://github.com/openstack/tempest/blob/master/tox.ini#L157-L162 It's sort of a hack; devstack is just using tox as a venv manager for setting up tempest. But, then we use tox in the runner (what used to be devstack-gate) so this made sense. -Matt Treinish 

> > > My guess is that the plugin isn't returning any tests that match the regex.
> >
> > I'm going to run it without a regex and see what it produces.
> >
> > It might be that the pre job I'm using to try to get the gabbits in the
> > right place is not working as desired.
> >
> > A few patchsets ago when I was using the oogly way of doing things
> > it was all working. 

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zbitter at redhat.com Fri Sep 28 15:02:24 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 28 Sep 2018 11:02:24 -0400 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: Message-ID: <3cd2fd8e-a9e4-7449-aab8-f28811e0ad46@redhat.com> On 28/09/18 9:25 AM, Eric Fried wrote: 

> It's time somebody said this.
>
> Every time we turn a corner or look under a rug, we find another use
> case for provider traits in placement. But every time we have to have
> the argument about whether that use case satisfies the original
> "intended purpose" of traits.
>
> That's the only reason I've ever been able to glean: that it (whatever "it"
> is) wasn't what the architects had in mind when they came up with the
> idea of traits. 
> We're not even talking about anything that would require
> changes to the placement API. Just, "Oh, that's not a *capability* -
> shut it down."

So I have no idea what traits or capabilities are (in this context), but I have a bit of experience with running a busy project where everyone wants to get their pet feature in, so I'd like to offer a couple of observations if I may:

* Conceptual integrity *is* important.
* 'Everything we could think of before we had a chance to try it' is not an especially compelling concept, and using it in place of one will tend to result in a lot of repeated arguments.

Both extremes ('that's how we've always done it' vs. 'free-for-all') are probably undesirable. I'd recommend trying to document traits in conceptual, rather than historical, terms. What are they good at? What are they not good at? Is there a limit to how many there can be while still remaining manageable? Are there other potential concepts that would map better to certain borderline use cases? That won't make the arguments go away, but it should help make them easier to resolve.

cheers, Zane.

From aschultz at redhat.com Fri Sep 28 15:02:26 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 28 Sep 2018 09:02:26 -0600 Subject: [openstack-dev] [tripleo][puppet] clearing the gate and landing patches to help CI Message-ID: 

Hey Folks,

Currently the tripleo gate is at 21 hours and we continue to have timeouts, and now scenario001/004 (in queens/pike) appear to be broken. Additionally we've got some patches in puppet-openstack that we need to land in order to resolve broken puppet unit tests, which is affecting both projects. 
Currently we need to wait for the following to land in puppet:

https://review.openstack.org/#/q/I4875b8bc8b2333046fc3a08b4669774fd26c89cb
https://review.openstack.org/#/c/605350/

In tripleo we currently have not identified the root cause for any of the timeout failures, so I'd like for us to work on that before trying to land anything else, because the gate resets are killing us and not helping anything. We have landed a few patches that have improved the situation but we're still hitting issues.

https://bugs.launchpad.net/tripleo/+bug/1795009 is the bug for the scenario001/004 issues. It appears that we're ending up with a newer version of ansible on the system than what the packages provide. Still working on figuring out where it's coming from.

Please do not approve anything or recheck unless it's to address CI issues at this time.

Thanks,
-Alex

From openstack at nemebean.com Fri Sep 28 15:08:48 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 28 Sep 2018 10:08:48 -0500 Subject: [openstack-dev] [oslo][castellan] Time for a 1.0 release? In-Reply-To: <408527c4-9db2-92ba-9a05-c682bd3a2caf@openstack.org> References: <8bab2939-ae16-31f3-8191-2cb1e81bc9df@nemebean.com> <1538075598.6608.136.camel@redhat.com> <408527c4-9db2-92ba-9a05-c682bd3a2caf@openstack.org> Message-ID: On 9/28/18 3:59 AM, Thierry Carrez wrote: 

> Ade Lee wrote:
>> On Tue, 2018-09-25 at 16:30 -0500, Ben Nemec wrote:
>>> Doug pointed out on a recent Oslo release review that castellan is
>>> still not officially 1.0. Given the age of the project and the fact
>>> that we're asking people to deploy a Castellan-compatible keystore
>>> as one of the base services, it's probably time to address that.
>>>
>>> To that end, I'm sending this to see if anyone is aware of any
>>> reasons we shouldn't go ahead and tag a 1.0 of Castellan. 
>>> >> >> + 1 > > +1 > Propose it and we can continue the discussion on the review :) > Done: https://review.openstack.org/606108 From doug at doughellmann.com Fri Sep 28 15:28:37 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 28 Sep 2018 11:28:37 -0400 Subject: [openstack-dev] [all][tc][elections] Stein TC Election Results In-Reply-To: <20180927235653.GA18250@shipstone.jp> References: <20180927235653.GA18250@shipstone.jp> Message-ID: Emmet Hikory writes: > Please join me in congratulating the 6 newly elected members of the > Technical Committee (TC): > > - Doug Hellmann (dhellmann) > - Julia Kreger (TheJulia) > - Jeremy Stanley (fungi) > - Jean-Philippe Evrard (evrardjp) > - Lance Bragstad (lbragstad) > - Ghanshyam Mann (gmann) Congratulations, everyone! I'm looking forward to serving with all of you for another term. > Full Results: > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864 > > Election process details and results are also available here: > https://governance.openstack.org/election/ > > Thank you to all of the candidates, having a good group of candidates helps > engage the community in our democratic process. > > Thank you to all who voted and who encouraged others to vote. Voter turnout > was significantly up from recent cycles. We need to ensure your voices are > heard. It's particularly good to hear that turnout is up, not just in percentage but in raw numbers, too. Thank you all for voting! Doug From doug at doughellmann.com Fri Sep 28 15:44:51 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 28 Sep 2018 11:44:51 -0400 Subject: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed In-Reply-To: References: Message-ID: zuul at openstack.org writes: > Build failed. 
> > - release-openstack-python http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ : FAILURE in 3m 57s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED

The error here is: ERROR: unknown environment 'venv'

It looks like os-log-merger is not set up for the release-openstack-python job, which expects a specific tox setup.

http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ara-report/result/7c6fd37c-82d8-48f7-b653-5bdba90cbc31/

From doug at doughellmann.com Fri Sep 28 16:05:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 28 Sep 2018 12:05:29 -0400 Subject: [openstack-dev] [python3-first] support in stable branches In-Reply-To: <20180928140517eucas1p285022a4ce8a367efadd9cdf05916a917~YlWEluavl1478514785eucas1p29@eucas1p2.samsung.com> References: <20180927144829eucas1p26786a6e62c869b8138066f8857dae267~YSSg9Kjph2467924679eucas1p2l@eucas1p2.samsung.com> <20180928140517eucas1p285022a4ce8a367efadd9cdf05916a917~YlWEluavl1478514785eucas1p29@eucas1p2.samsung.com> Message-ID: Dariusz Krol writes: 

> Hello,
>
> I'm specifically referring to branches mentioned in:
> https://github.com/openstack/goal-tools/blob/4125c31e74776a7dc6a15d2276ab51ff3e73cd16/goal_tools/python3_first/jobs.py#L54

I'm still not entirely sure what you are seeing that you did not expect, but I'll take a guess. 
The migration script tries to figure out which jobs apply to which branches of each repo by looking at the branch specifier settings in project-config, and then it creates an import patch for each branch with the relevant jobs. Subsequent steps in the script change the documentation and release notes jobs and then add new python 3.6 testing jobs. Those steps only apply to the master branch. So, if you have a patch importing a python 3 job setting to a stable branch of a repo where you aren't expecting it (and it isn't supported), that's most likely because project-config has no branch specifiers for the job (meaning it should run on all branches). We did find several cases where that was true because projects added jobs without branch specifiers after the branches were created, and then never back-ported any patches to the stable branch. See http://lists.openstack.org/pipermail/openstack-dev/2018-August/133594.html for details. Doug > I hope this helps. > > > Best, > > Dariusz Krol > > > On 09/27/2018 06:04 PM, Ben Nemec wrote: >> >> >> On 9/27/18 10:36 AM, Doug Hellmann wrote: >>> Dariusz Krol writes: >>> >>>> Hello Champions :) >>>> >>>> >>>> I work on the Trove project and we are wondering if python3 should be >>>> supported in previous releases as well? >>>> >>>> Actually this question was asked by Alan Pevec from the stable branch >>>> maintainers list. >>>> >>>> I saw you added releases up to ocata to support python3 and there are >>>> already changes on gerrit waiting to be merged but after reading [1] I >>>> have my doubts about this. >>> >>> I'm not sure what you're referring to when you say "added releases up to >>> ocata" here. Can you link to the patches that you have questions about? >> >> Possibly the zuul migration patches for all the stable branches?
If >> so, those don't change the status of python 3 support on the stable >> branches, they just split the zuul configuration to make it easier to >> add new python 3 jobs on master without affecting the stable branches. >> >>> >>>> Could you elaborate why it is necessary to support previous releases ? >>>> >>>> >>>> Best, >>>> >>>> Dariusz Krol >>>> >>>> >>>> [1] https://docs.openstack.org/project-team-guide/stable-branches.html >>>> __________________________________________________________________________ >>>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> From cdent+os at anticdent.org Fri Sep 28 16:06:34 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 28 Sep 2018 17:06:34 +0100 (BST) Subject: [openstack-dev] [placement] update 18-39 Message-ID: HTML: https://anticdent.org/placement-update-18-39.html Welcome to a placement update. This week is mostly focused on specs and illuminating some of the pressing issues with extraction. # Most Important Last week's important tasks remain important: * Work on specs and setting [priorities](https://etherpad.openstack.org/p/nova-ptg-stein-priorities). * Working towards upgrade tests (see more on that in the extraction section below). # What's Changed Tetsuro is a core reviewer in placement now. Yay! Welcome. Mel produced a [summary of the PTG](http://lists.openstack.org/pipermail/openstack-dev/2018-September/135122.html) with some good links and plans. 
# Questions and Links No answer to last week's question that I can recall, so here it is again: * [Last week], belmoreira showed up in [#openstack-placement](http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2018-09-20.log.html#t2018-09-20T14:11:59) with some issues with expected resource providers not showing up in allocation candidates. This was traced back to `max_unit` for `VCPU` being locked at == `total` and hardware which had had SMT turned off now reporting fewer CPUs, thus being unable to accept existing large flavors. Discussion ensued about ways to potentially make `max_unit` more manageable by operators. The existing constraint is there for a reason (discussed in IRC) but that reason is not universally agreed. There are two issues with this: The "reason" is not universally agreed and we didn't resolve that. Also, management of `max_unit` of any inventory gets more complicated in a world of complex NUMA topologies. Eric has raised a question about the [intended purpose of traits](http://lists.openstack.org/pipermail/openstack-dev/2018-September/135209.html). # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 18. +1. * [In progress placement bugs](https://goo.gl/vzGGDQ) 9. -1. 
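To make the `max_unit` question above concrete, here is a minimal sketch (not placement's actual code) of the checks a single inventory record imposes on one requested amount; the field names follow the placement inventory schema:

```python
def fits(inv, requested, used=0):
    """Sketch of placement's per-inventory checks for one requested
    amount of a resource class: unit bounds, step size, and capacity."""
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    return (
        inv["min_unit"] <= requested <= inv["max_unit"]
        and requested % inv["step_size"] == 0
        and used + requested <= capacity
    )

# SMT turned off: total drops from 32 to 16, and max_unit is locked
# to total. A 24-VCPU flavor no longer fits, even though the
# allocation ratio leaves plenty of capacity (16 * 4.0 = 64).
vcpu = {"total": 16, "reserved": 0, "allocation_ratio": 4.0,
        "min_unit": 1, "max_unit": 16, "step_size": 1}

fits(vcpu, 24)  # False: blocked by max_unit, not by capacity
fits(vcpu, 16)  # True
```

With `max_unit` pinned to `total`, the allocation-ratio headroom is irrelevant for any single allocation larger than the (now smaller) total, which is exactly the complaint raised in the channel.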
# Specs * Account for host agg allocation ratio in placement (Still in rocky/) * Add subtree filter for GET /resource_providers * Resource provider - request group mapping in allocation candidate * VMware: place instances on resource pool (still in rocky/) * Standardize CPU resource tracking * Allow overcommit of dedicated CPU (Has an alternative which changes allocations to a float) * List resource providers having inventory * Bi-directional enforcement of traits * allow transferring ownership of instance * Modelling passthrough devices for report to placement * Propose counting quota usage from placement and API database (A bit out of date but may be worth resurrecting) * Spec: allocation candidates in tree * [WIP] generic device discovery policy * Nova Cyborg interaction specification. * supporting virtual NVDIMM devices * Spec: Support filtering by forbidden aggregate * Proposes NUMA topology with RPs * Support initial allocation ratios * Count quota based on resource class # Main Themes ## Making Nested Useful Work on getting nova's use of nested resource providers happy and fixing bugs discovered in placement in the process. * * ## Consumer Generations gibi is still working hard to drive home support for consumer generations on the nova side. Because of some dependency management that stuff is currently in the following topic: * ## Extraction There are few large-ish things in progress with the extraction process which need some broader attention: * Matt is working on a [patch to grenade](https://review.openstack.org/604454) to deal with upgrading, with a migration of data. * We have work in progress to tune up the documentation but we are not yet publishing documentation. We need to work out a plan for this. Presumably we don't want to be publishing docs until we are publishing code, but the interdependencies need to be teased out. 
* We need to decide how we are going to manage database schema migrations (alembic is the modern way) and we need to create the tooling for running those migrations (as well as upgrade checks). This includes deciding how we want to manage command line tools (using nova's example or something else). Until those things happen we don't have a "thing" which people can install and run, unless they do some extra hacking around which we don't want to impose upon people any longer than necessary. # Other As with last time, I'm not going to make a list of links to pending changes that aren't already listed above. I'll start doing that again eventually (once priorities are more clear), but for now it is useful to look at [open placement patches](https://review.openstack.org/#/q/project:openstack/placement+status:open) and patches from everywhere which [mention placement in the commit message](https://review.openstack.org/#/q/message:placement+status:open). # End Taking a few days off is a great way to get out of sync. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jaypipes at gmail.com Fri Sep 28 16:12:47 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 28 Sep 2018 12:12:47 -0400 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: Message-ID: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> On 09/28/2018 09:25 AM, Eric Fried wrote: > It's time somebody said this. > > Every time we turn a corner or look under a rug, we find another use > case for provider traits in placement. But every time we have to have > the argument about whether that use case satisfies the original > "intended purpose" of traits. > > That's only reason I've ever been able to glean: that it (whatever "it" > is) wasn't what the architects had in mind when they came up with the > idea of traits. Don't pussyfoot around things. It's me you're talking about, Eric. 
You could just ask me instead of passive-aggressively posting to the list like this. > We're not even talking about anything that would require changes to > the placement API. Just, "Oh, that's not a *capability* - shut it > down." That's precisely the attitude that got the Nova scheduler into the unmaintainable and convoluted mess that it is now: "well, who cares if a concept was originally intended to describe X, it's just *easier* for us to re-use this random piece of data in ways it wasn't intended because that way we don't have to change anything about our docs or our API". And *this* is the kind of stuff you end up with: https://github.com/openstack/nova/blob/99bf62e42701397690fe2b4987ce4fd7879355b8/nova/scheduler/filters/compute_capabilities_filter.py#L35-L107 Which is a pile of unreadable, unintelligible garbage; nobody knows how it works, how it originally was intended to work, or how to really clean it up. > Bubble wrap was originally intended as a textured wallpaper and a > greenhouse insulator. Can we accept the fact that traits have (many, > many) uses beyond marking capabilities, and quit with the arbitrary > restrictions? They aren't arbitrary. They are there for a reason: a trait is a boolean capability. It describes something that either a provider is capable of supporting or it isn't. Conceptually, having boolean traits/capabilities is important because it allows the user to reason simply about how a provider meets the requested constraints for scheduling. Currently, those constraints include the following: * Does the provider have *capacity* for the requested resources? * Does the provider have the required (or forbidden) *capabilities*? * Does the provider belong to some group? If we want to add further constraints to the placement allocation candidates request that ask things like: * Does the provider have version 1.22.61821 of BIOS firmware from Marvell installed on it? 
* Does the provider support an FPGA that has had an OVS program flashed to it in the last 20 days? * Does the provider belong to physical network "corpnet" and also support creation of virtual NICs of type either "DIRECT" or "NORMAL"? Then we should add a data model that allows providers to be decorated with key/value (or more complex than key/value) information where we can query for those kinds of constraints without needing to encode all sorts of non-binary bits of information into a capability string. Propose such a thing and I'll gladly support it. But I won't support bastardizing the simple concept of a boolean capability just because we don't want to change the API or database schema. -jay From mriedemos at gmail.com Fri Sep 28 16:21:27 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 28 Sep 2018 11:21:27 -0500 Subject: [openstack-dev] [nova][stable] Preparing for ocata-em (extended maintenance) Message-ID: Per the other thread on this [1] I've created an etherpad [2] to track what needs to happen to get nova's stable/ocata branch ready for Extended Maintenance [3] which means we need to flush our existing Ocata backports that we want in the final Ocata release before tagging the branch as ocata-em, after which point we won't do releases from that branch anymore. The etherpad lists each open ocata backport along with any of its related backports on newer branches like pike/queens/etc. Since we need the backports to go in order, we need to review and merge the changes on the newer branches first. With the state of the gate lately, we really can't sit on our hands here because it will probably take up to a week just to merge all of the changes for each branch. Once the Ocata backports are flushed through, we'll cut the final release and tag the branch as being in extended maintenance.
Do we want to coordinate a review day next week for the nova-stable-maint core team, like Tuesday, or just trust that you all know who you are and will help out as necessary in getting these reviews done? Non-stable cores are also welcome to help review here to make sure we're not missing something, which is also a good way to get noticed as caring about stable branches and eventually get you on the stable maint core team. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/thread.html#134810 [2] https://etherpad.openstack.org/p/nova-ocata-em [3] https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance -- Thanks, Matt From Arkady.Kanevsky at dell.com Fri Sep 28 16:48:53 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Fri, 28 Sep 2018 16:48:53 +0000 Subject: [openstack-dev] [all][tc][elections] Stein TC Election Results In-Reply-To: References: <20180927235653.GA18250@shipstone.jp> Message-ID: <27c0b69e3836479a90ae52c0c1033a8f@AUSX13MPS308.AMER.DELL.COM> Congrats to newly elected TCs and all people who run. -----Original Message----- From: Doug Hellmann Sent: Friday, September 28, 2018 10:29 AM To: Emmet Hikory; OpenStack Developers Subject: Re: [openstack-dev] [all][tc][elections] Stein TC Election Results [EXTERNAL EMAIL] Please report any suspicious attachments, links, or requests for sensitive information. Emmet Hikory writes: > Please join me in congratulating the 6 newly elected members of the > Technical Committee (TC): > > - Doug Hellmann (dhellmann) > - Julia Kreger (TheJulia) > - Jeremy Stanley (fungi) > - Jean-Philippe Evrard (evrardjp) > - Lance Bragstad (lbragstad) > - Ghanshyam Mann (gmann) Congratulations, everyone! I'm looking forward to serving with all of you for another term. 
> Full Results: > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864 > > Election process details and results are also available here: > https://governance.openstack.org/election/ > > Thank you to all of the candidates, having a good group of candidates helps > engage the community in our democratic process. > > Thank you to all who voted and who encouraged others to vote. Voter turnout > was significantly up from recent cycles. We need to ensure your voices are > heard. It's particularly good to hear that turnout is up, not just in percentage but in raw numbers, too. Thank you all for voting! Doug __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Fri Sep 28 17:19:31 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 28 Sep 2018 18:19:31 +0100 (BST) Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> Message-ID: On Fri, 28 Sep 2018, Jay Pipes wrote: > On 09/28/2018 09:25 AM, Eric Fried wrote: >> It's time somebody said this. Yes, a useful topic, I think. >> Every time we turn a corner or look under a rug, we find another use >> case for provider traits in placement. But every time we have to have >> the argument about whether that use case satisfies the original >> "intended purpose" of traits. >> >> That's only reason I've ever been able to glean: that it (whatever "it" >> is) wasn't what the architects had in mind when they came up with the >> idea of traits. > > Don't pussyfoot around things. It's me you're talking about, Eric. You could > just ask me instead of passive-aggressively posting to the list like this. It's not just you. 
Ed and I have also expressed some fairly strong statements about how traits are "supposed" to be used and I would guess that from Eric's perspective all three of us (amongst others) have some form of architectural influence. Since it takes a village and all that. > They aren't arbitrary. They are there for a reason: a trait is a boolean > capability. It describes something that either a provider is capable of > supporting or it isn't. This is somewhat (maybe even only slightly) different from what I think the definition of a trait is, and that nuance may be relevant. I describe a trait as a "quality that a resource provider has" (the car is blue). This contrasts with a resource class which is a "quantity that a resource provider has" (the car has 4 doors). Our implementation is pretty much exactly that ^. We allow clients to ask "give me things that have qualities x, y, z, not qualities a, b, c, and quantities of G of 5 and H of 7". Add in aggregates and we have exactly what you say: > * Does the provider have *capacity* for the requested resources? > * Does the provider have the required (or forbidden) *capabilities*? > * Does the provider belong to some group? The nuance of difference is that your description of *capabilities* seems more narrow than my description of *qualities* (aka characteristics). You've got something fairly specific in mind, as a way of constraining the profusion of noise that has happened with how various kinds of information about resources of all sorts is managed in OpenStack, as you describe in your message. I do not think it should be placement's job to control that noise. It should be placement's job to provide a very strict contract about what you can do with a trait: * create it, if necessary * assign it to one or more resource providers * ask for providers that either have it * ... or do not have it That's all. Placement _code_ should _never_ be aware of the value of a trait (except for the magical MISC_SHARES...).
It should never become possible to regex on traits or do comparisons (required=...). > If we want to add further constraints to the placement allocation candidates > request that ask things like: > > * Does the provider have version 1.22.61821 of BIOS firmware from Marvell > installed on it? That's a quality of the provider in a moment. > * Does the provider support an FPGA that has had an OVS program flashed to it > in the last 20 days? If you squint, so is this. > * Does the provider belong to physical network "corpnet" and also support > creation of virtual NICs of type either "DIRECT" or "NORMAL"? And these. But at least some of them are dynamic rather than some kind of platonic ideal associated with the resource provider. I don't think placement should be concerned about temporal aspects of traits. If we can't write a web service that can handle setting lots of traits every second of every day, we should go home. If clients of placement want to set weird traits, more power to them. However, if clients of placement (such as nova) that act as the orchestrator of resource providers manipulated by multiple systems (neutron, cinder, ironic, cyborg, etc.) wish to set some constraints on how and what traits can do and mean, then that is up to them. nova-scheduler is the thing that is doing `GET /allocation_candidates` for those multiple systems. It presumably should have some say in what traits it is willing to express and use. But the placement service doesn't and shouldn't care. > Then we should add a data model that allows providers to be decorated with > key/value (or more complex than key/value) information where we can query for > those kinds of constraints without needing to encode all sorts of non-binary > bits of information into a capability string. Let's never do this, please. The three capabilities (ha!) of placement that you listed above ("Does the...") are very powerful as is and have a conceptual integrity that's really quite awesome.
I think keeping it contained and constrained in very "simple" concepts like that was a stroke of genius you (Jay) made and I'd hope we can keep it clean like that. If we weren't a multiple-service oriented system, and instead had some kind of k8s-like etcd-like keeper-of-all-the-info-about-everything, then sure, having what we currently model as resource providers be a giant blob of metadata (with quantities, qualities, and key-values) that is an authority for the entire system might make some kind of sense. But we don't. If we wanted to migrate to having something like that, using placement as the trojan horse for such a change, either with intent or by accident, would be unfortunate.
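As an aside, the "qualities and quantities" request described earlier maps onto a single `GET /allocation_candidates` query. A sketch of composing that query string (this assumes a microversion new enough to support forbidden traits via the `!TRAIT` syntax, 1.22 or later):

```python
from urllib.parse import urlencode

def allocation_candidates_query(resources, required=(), forbidden=()):
    """Compose the query string for GET /allocation_candidates:
    quantities go in `resources`, qualities in `required`, with
    forbidden traits expressed as !TRAIT."""
    params = {
        "resources": ",".join(f"{rc}:{amount}" for rc, amount in resources.items()),
    }
    traits = list(required) + [f"!{t}" for t in forbidden]
    if traits:
        params["required"] = ",".join(traits)
    return urlencode(params, safe=":,!")

# "qualities x, y, z, not a, b, c, and quantities of G of 5 and H of 7"
q = allocation_candidates_query(
    {"G": 5, "H": 7}, required=["X", "Y", "Z"], forbidden=["A", "B", "C"])
# → resources=G:5,H:7&required=X,Y,Z,!A,!B,!C
```

The point being: nothing in the query inspects the *value* of a trait; it is only present-or-absent, which is the contract being argued for above.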
/me cues brad pitt -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mriedemos at gmail.com Fri Sep 28 17:56:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 28 Sep 2018 12:56:41 -0500 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: <0cac451b-6519-f0de-acb7-0703560b1f4d@gmail.com> Message-ID: On 9/21/2018 9:08 AM, Elõd Illés wrote: > Hi, > > Here is an etherpad with the teams that have stable:follow-policy tag on > their repos: > > https://etherpad.openstack.org/p/ocata-final-release-before-em > > On the links you can find reports about the open and unreleased changes, > that could be a useful input for the before-EM/final release. > Please have a look at the report (and review the open patches if there > are) so that a release can be made if necessary. > > Thanks, > > Előd I've added nova's ocata-em tracking etherpad to the list. https://etherpad.openstack.org/p/nova-ocata-em -- Thanks, Matt From morgan.fainberg at gmail.com Fri Sep 28 17:57:05 2018 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Fri, 28 Sep 2018 10:57:05 -0700 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: Ideally I would like to see it in the form of least specific to most specific. But more importantly in a way that there are no additional delimiters between the service type and the resource. Finally, I do not like the change of plurality depending on action type. I propose we consider *<service_type>:<resource>:[<subresource>:]<action>* Example for keystone (note: action names below are strictly examples; I am fine with whatever form those actions take): *identity:projects:create* *identity:projects:delete* *identity:projects:list* *identity:projects:get* It keeps things simple and consistent when you're looking through overrides / defaults.
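The proposed shape can be sketched as a tiny helper (hypothetical code, purely to illustrate the naming convention; the subresource segment is the optional part):

```python
def policy_name(service_type, resource, action, subresource=None):
    """Compose <service_type>:<resource>:[<subresource>:]<action> --
    least specific to most specific, one delimiter throughout."""
    parts = [service_type, resource]
    if subresource:
        parts.append(subresource)
    parts.append(action)
    return ":".join(parts)

policy_name("identity", "projects", "create")        # identity:projects:create
policy_name("identity", "projects", "list", "tags")  # identity:projects:tags:list
```

Note there is no change of plurality by action type: the resource segment stays the same whether the action is `list` or `create`.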
--Morgan On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad wrote: > Bumping this thread again and proposing two conventions based on the > discussion here. I propose we decide on one of the two following > conventions: > > *<service_type>:<resource>:<action>* > > or > > *<service_type>:<action>_<resource>* > > Where <service_type> is the corresponding service type of the project [0], > and <action> is either create, get, list, update, or delete. I think > decoupling the method from the policy name should aid in consistency, > regardless of the underlying implementation. The HTTP method specifics can > still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > > I think the plurality of the resource should default to what makes sense > for the operation being carried out (e.g., list:foobars, create:foobar). > > I don't mind the first one because it's clear about what the delimiter is > and it doesn't look weird when projects have something like: > > <service_type>:<resource>:<subresource>:<action> > > If folks are ok with this, I can start working on some documentation that > explains the motivation for this. Afterward, we can figure out how we want > to track this work. > > What color do you want the shed to be? > > [0] https://service-types.openstack.org/service-types.json > [1] > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > > On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad > wrote: > >> >> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann >> wrote: >> >>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < >>> john at johngarbutt.com> wrote ---- >>> > tl;dr: +1 consistent names >>> > I would make the names mirror the API... because the Operator setting >>> them knows the API, not the code. Ignore the crazy names in Nova, I certainly >>> hate them. >>> >>> Big +1 on consistent naming, which will help operators as well as >>> developers to maintain those. >>> >>> > >>> > Lance Bragstad wrote: >>> > > I'm curious if anyone has context on the "os-" part of the format?
>>> > My memory of the Nova policy mess... * Nova's policy rules >>> traditionally followed the patterns of the code >>> > ** Yes, horrible, but it happened. * The code used to have the >>> OpenStack API and the EC2 API, hence the "os" * API used to expand with >>> extensions, so the policy name is often based on extensions ** note most of >>> the extension code has now gone, including lots of related policies * Policy >>> in code was focused on getting us to a place where we could rename policy ** >>> Whoop whoop by the way, it feels like we are really close to something >>> sensible now! >>> > Lance Bragstad wrote: >>> > Thoughts on using create, list, update, and delete as opposed to >>> post, get, put, patch, and delete in the naming convention? >>> > I could go either way as I think about "list servers" in the API. But >>> my preference is for the URL stub and POST, GET, etc. >>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad >>> wrote: If we consider dropping "os", should we entertain dropping "api", >>> too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple >>> service types (e.g. "compute" or "loadbalancer"). >>> > +1. The API is known as "compute" in api-ref, so the policy should be >>> for "compute", etc. >>> >>> Agree on mapping the policy name with api-ref as much as possible. Other >>> than policy name having 'os-', we have 'os-' in resource name also in nova >>> API url like /os-agents, /os-aggregates etc (almost every resource except >>> servers, flavors). As we cannot get rid of those from API url, we need to >>> keep the same in policy naming too? Or we can have policy name like >>> compute:agents:create/post but that mismatches the api-ref, where the agents >>> resource url is os-agents. >>> >> >> Good question. I think this depends on how the service does policy >> enforcement. >> >> I know we did something like this in keystone, which required policy >> names and method names to be the same: >> >> "identity:list_users": "..." 
>> >> Because the initial implementation of policy enforcement used a decorator >> like this: >> >> from keystone import controller >> >> @controller.protected >> def list_users(self): >> ... >> >> Having the policy name the same as the method name made it easier for the >> decorator implementation to resolve the policy needed to protect the API >> because it just looked at the name of the wrapped method. The advantage was >> that it was easy to implement new APIs because you only needed to add a >> policy, implement the method, and make sure you decorate the implementation. >> >> While this worked, we are moving away from it entirely. The decorator >> implementation was ridiculously complicated. Only a handful of keystone >> developers understood it. With the addition of system-scope, it would have >> only become more convoluted. It also enables a much more copy-paste pattern >> (e.g., so long as I wrap my method with this decorator implementation, >> things should work right?). Instead, we're calling enforcement within the >> controller implementation to ensure things are easier to understand. It >> requires developers to be cognizant of how different token types affect the >> resources within an API. That said, coupling the policy name to the method >> name is no longer a requirement for keystone. >> >> Hopefully, that helps explain why we needed them to match. >> >> >>> >>> Also we have action API (i know from nova not sure from other services) >>> like POST /servers/{server_id}/action {addSecurityGroup} and their current >>> policy name is all inconsistent. 
few have policy name including their >>> resource name like "os_compute_api:os-flavor-access:add_tenant_access", few >>> has 'action' in policy name like >>> "os_compute_api:os-admin-actions:reset_state" and few has direct action >>> name like "os_compute_api:os-console-output" >>> >> >> Since the actions API relies on the request body and uses a single HTTP >> method, does it make sense to have the HTTP method in the policy name? It >> feels redundant, and we might be able to establish a convention that's more >> meaningful for things like action APIs. It looks like cinder has a similar >> pattern [0]. >> >> [0] >> https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action >> >> >>> >>> May be we can make them consistent with >>> :: or any better opinion. >>> >>> > From: Lance Bragstad > The topic of having >>> consistent policy names has popped up a few times this week. >>> > >>> > I would love to have this nailed down before we go through all the >>> policy rules again. In my head I hope in Nova we can go through each policy >>> rule and do the following: >>> > * move to new consistent policy name, deprecate existing name* >>> hardcode scope check to project, system or user** (user, yes... keypairs, >>> yuck, but its how they work)** deprecate in rule scope checks, which are >>> largely bogus in Nova anyway* make read/write/admin distinction** therefore >>> adding the "noop" role, amount other things >>> >>> + policy granularity. >>> >>> It is good idea to make the policy improvement all together and for all >>> rules as you mentioned. But my worries is how much load it will be on >>> operator side to migrate all policy rules at same time? What will be the >>> deprecation period etc which i think we can discuss on proposed spec - >>> https://review.openstack.org/#/c/547850 >> >> >> Yeah, that's another valid concern. I know at least one operator has >> weighed in already. I'm curious if operators have specific input here. 
>> >> It ultimately depends on if they override existing policies or not. If a >> deployment doesn't have any overrides, it should be a relatively simple >> change for operators to consume. >> >> >>> >>> >>> -gmann >>> >>> > Thanks,John >>> __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From edmondsw at us.ibm.com Fri Sep 28 17:58:52 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Fri, 28 Sep 2018 13:58:52 -0400 Subject: [openstack-dev] [goal][python3] week 7 update In-Reply-To: References: Message-ID: Doug Hellmann wrote on 09/26/2018 06:29:11 PM: > * We do not want to set the override once in testenv, because that > breaks the more specific versions used in default environments like > py35 and py36 (at least under older versions of tox). I assume that something like https://git.openstack.org/cgit/openstack/nova-powervm/commit/?id=fa64a93c965e6a6692711962ad6584534da81695 should be a perfectly acceptable alternative in at least some cases. Agreed? 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Fri Sep 28 18:02:44 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 28 Sep 2018 13:02:44 -0500 Subject: [openstack-dev] [all][tc][elections] Stein TC Election Results In-Reply-To: <20180928001957.kaeqro62esqgihep@yuggoth.org> References: <20180927235653.GA18250@shipstone.jp> <20180928001957.kaeqro62esqgihep@yuggoth.org> Message-ID: ++ To what Jeremy said and congratulations. On 9/27/2018 7:19 PM, Jeremy Stanley wrote: > On 2018-09-27 20:00:42 -0400 (-0400), Mohammed Naser wrote: > [...] >> A big thank you to our election team who oversees all of this as >> well :) > [...] > > I wholeheartedly concur! > > And an even bigger thank you to the 5 candidates who were not > elected this term; please run again in the next election if you're > able, I think every one of you would have made a great choice for a > seat on the OpenStack TC. Our community is really lucky to have so > many qualified people eager to take on governance tasks. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From hrybacki at redhat.com Fri Sep 28 18:03:38 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Fri, 28 Sep 2018 14:03:38 -0400 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg wrote: > > Ideally I would like to see it in the form of least specific to most specific. But more importantly in a way that there is no additional delimiters between the service type and the resource. 
Finally, I do not like the change of plurality depending on action type.
> >
> > I propose we consider
> >
> > <service-type>:<resource>:[<sub-resource>:]<action>
> >
> > Example for keystone (note, action names below are strictly examples; I am fine with whatever form those actions take):
> identity:projects:create
> identity:projects:delete
> identity:projects:list
> identity:projects:get
> >
> > It keeps things simple and consistent when you're looking through overrides / defaults.
> --Morgan
+1 -- I think the ordering will be cleaner if `resource` comes before `action|subaction`.

/R

Harry

> > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad wrote:
>>
>> Bumping this thread again and proposing two conventions based on the discussion here. I propose we decide on one of the two following conventions:
>>
>> <service-type>:<resource>:<action>
>>
>> or
>>
>> <service-type>:<action>_<resource>
>>
>> Where <service-type> is the corresponding service type of the project [0], and <action> is either create, get, list, update, or delete. I think decoupling the method from the policy name should aid in consistency, regardless of the underlying implementation. The HTTP method specifics can still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>>
>> I think the plurality of the resource should default to what makes sense for the operation being carried out (e.g., list:foobars, create:foobar).
>>
>> I don't mind the first one because it's clear about what the delimiter is and it doesn't look weird when projects have something like:
>>
>> <service-type>:<resource>:<sub-resource>:<action>
>>
>> If folks are ok with this, I can start working on some documentation that explains the motivation for this. Afterward, we can figure out how we want to track this work.
>>
>> What color do you want the shed to be?
>> >> [0] https://service-types.openstack.org/service-types.json >> [1] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule >> >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad wrote: >>> >>> >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann wrote: >>>> >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt wrote ---- >>>> > tl;dr+1 consistent names >>>> > I would make the names mirror the API... because the Operator setting them knows the API, not the codeIgnore the crazy names in Nova, I certainly hate them >>>> >>>> Big +1 on consistent naming which will help operator as well as developer to maintain those. >>>> >>>> > >>>> > Lance Bragstad wrote: >>>> > > I'm curious if anyone has context on the "os-" part of the format? >>>> > >>>> > My memory of the Nova policy mess...* Nova's policy rules traditionally followed the patterns of the code >>>> > ** Yes, horrible, but it happened.* The code used to have the OpenStack API and the EC2 API, hence the "os"* API used to expand with extensions, so the policy name is often based on extensions** note most of the extension code has now gone, including lots of related policies* Policy in code was focused on getting us to a place where we could rename policy** Whoop whoop by the way, it feels like we are really close to something sensible now! >>>> > Lance Bragstad wrote: >>>> > Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention? >>>> > I could go either way as I think about "list servers" in the API.But my preference is for the URL stub and POST, GET, etc. >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote:If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"?I wouldn't be opposed to simple service types (e.g "compute" or "loadbalancer"). >>>> > +1The API is known as "compute" in api-ref, so the policy should be for "compute", etc. 
>>>> >>>> Agree on mapping the policy name with api-ref as much as possible. Other than policy name having 'os-', we have 'os-' in resource name also in nova API url like /os-agents, /os-aggregates etc (almost every resource except servers , flavors). As we cannot get rid of those from API url, we need to keep the same in policy naming too? or we can have policy name like compute:agents:create/post but that mismatch from api-ref where agents resource url is os-agents. >>> >>> >>> Good question. I think this depends on how the service does policy enforcement. >>> >>> I know we did something like this in keystone, which required policy names and method names to be the same: >>> >>> "identity:list_users": "..." >>> >>> Because the initial implementation of policy enforcement used a decorator like this: >>> >>> from keystone import controller >>> >>> @controller.protected >>> def list_users(self): >>> ... >>> >>> Having the policy name the same as the method name made it easier for the decorator implementation to resolve the policy needed to protect the API because it just looked at the name of the wrapped method. The advantage was that it was easy to implement new APIs because you only needed to add a policy, implement the method, and make sure you decorate the implementation. >>> >>> While this worked, we are moving away from it entirely. The decorator implementation was ridiculously complicated. Only a handful of keystone developers understood it. With the addition of system-scope, it would have only become more convoluted. It also enables a much more copy-paste pattern (e.g., so long as I wrap my method with this decorator implementation, things should work right?). Instead, we're calling enforcement within the controller implementation to ensure things are easier to understand. It requires developers to be cognizant of how different token types affect the resources within an API. 
That said, coupling the policy name to the method name is no longer a requirement for keystone. >>> >>> Hopefully, that helps explain why we needed them to match. >>> >>>> >>>> >>>> Also we have action API (i know from nova not sure from other services) like POST /servers/{server_id}/action {addSecurityGroup} and their current policy name is all inconsistent. few have policy name including their resource name like "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in policy name like "os_compute_api:os-admin-actions:reset_state" and few has direct action name like "os_compute_api:os-console-output" >>> >>> >>> Since the actions API relies on the request body and uses a single HTTP method, does it make sense to have the HTTP method in the policy name? It feels redundant, and we might be able to establish a convention that's more meaningful for things like action APIs. It looks like cinder has a similar pattern [0]. >>> >>> [0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action >>> >>>> >>>> >>>> May be we can make them consistent with :: or any better opinion. >>>> >>>> > From: Lance Bragstad > The topic of having consistent policy names has popped up a few times this week. >>>> > >>>> > I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following: >>>> > * move to new consistent policy name, deprecate existing name* hardcode scope check to project, system or user** (user, yes... keypairs, yuck, but its how they work)** deprecate in rule scope checks, which are largely bogus in Nova anyway* make read/write/admin distinction** therefore adding the "noop" role, amount other things >>>> >>>> + policy granularity. >>>> >>>> It is good idea to make the policy improvement all together and for all rules as you mentioned. 
But my worries is how much load it will be on operator side to migrate all policy rules at same time? What will be the deprecation period etc which i think we can discuss on proposed spec - https://review.openstack.org/#/c/547850 >>> >>> >>> Yeah, that's another valid concern. I know at least one operator has weighed in already. I'm curious if operators have specific input here. >>> >>> It ultimately depends on if they override existing policies or not. If a deployment doesn't have any overrides, it should be a relatively simple change for operators to consume. >>> >>>> >>>> >>>> >>>> -gmann >>>> >>>> > Thanks,John __________________________________________________________________________ >>>> > OpenStack Development Mailing List (not for usage questions) >>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> > >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Fri Sep 28 18:38:08 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 28 Sep 2018 18:38:08 +0000 Subject: [openstack-dev] 
[goal][python3] week 7 update In-Reply-To: References: Message-ID: <20180928183808.dagf56dv7yrfhs46@yuggoth.org> On 2018-09-28 13:58:52 -0400 (-0400), William M Edmonds wrote: > Doug Hellmann wrote on 09/26/2018 06:29:11 PM: > > > * We do not want to set the override once in testenv, because that > > breaks the more specific versions used in default environments like > > py35 and py36 (at least under older versions of tox). > > > I assume that something like > https://git.openstack.org/cgit/openstack/nova-powervm/commit/?id=fa64a93c965e6a6692711962ad6584534da81695 > should be a perfectly acceptable alternative in at least some cases. > Agreed? I believe the confusion is that ignore_basepython_conflict didn't appear in a release of tox until after we started patching projects for this effort (in fact it was added to tox in part because we discovered the issue in originally attempting to use basepython globally). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Fri Sep 28 18:42:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 28 Sep 2018 14:42:17 -0400 Subject: [openstack-dev] [goals][python3][heat][manila][qinling][zaqar][magnum][keystone][congress] switching python package jobs In-Reply-To: References: Message-ID: Doug Hellmann writes: > I think we are ready to go ahead and switch all of the python packaging > jobs to the new set defined in the publish-to-pypi-python3 template > [1]. We still have some cleanup patches for projects that have not > completed their zuul migration, but there are only a few and rebasing > those will be easy enough. > > The template adds a new check job that runs when any files related to > packaging are changed (readme, setup, etc.). Otherwise it switches from > the python2-based PyPI job to use python3. 
> > I have the patch to switch all official projects ready in [2]. > > Doug > > [1] http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/project-templates.yaml#n218 > [2] https://review.openstack.org/#/c/598323/ This change is now in place. The Ironic team discovered one issue, and the fix is proposed as https://review.openstack.org/606152 This change has also reopened the question of how to publish some of the projects for which we do not own names on PyPI. I registered manila, qinling, and zaqar-ui by uploading Rocky series releases of those projects and then added openstackci as an owner so we can upload new packages this cycle. I asked the owners of the name "heat" to allow us to use it, and they rejected the request. So, I proposed a change to heat to update the sdist name to "openstack-heat". * https://review.openstack.org/606160 We don't own "magnum" but there is already an "openstack-magnum" set up with old releases, so I have proposed a change to the magnum repo to change the dist name there, so we can resume using it. * https://review.openstack.org/606162 I have filed requests with the maintainers of PyPI to claim the names "keystone" and "congress". That may take some time. Please let me know if you're willing to simply use "openstack-keystone" and "openstack-congress" instead. I will take care of configuring PyPI and proposing the patch to update your setup.cfg (that way you can approve the change). 
* https://github.com/pypa/warehouse/issues/4770 * https://github.com/pypa/warehouse/issues/4771 Doug From openstack at nemebean.com Fri Sep 28 18:44:44 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 28 Sep 2018 13:44:44 -0500 Subject: [openstack-dev] [goal][python3] week 7 update In-Reply-To: <20180928183808.dagf56dv7yrfhs46@yuggoth.org> References: <20180928183808.dagf56dv7yrfhs46@yuggoth.org> Message-ID: On 9/28/18 1:38 PM, Jeremy Stanley wrote: > On 2018-09-28 13:58:52 -0400 (-0400), William M Edmonds wrote: >> Doug Hellmann wrote on 09/26/2018 06:29:11 PM: >> >>> * We do not want to set the override once in testenv, because that >>> breaks the more specific versions used in default environments like >>> py35 and py36 (at least under older versions of tox). >> >> >> I assume that something like >> https://git.openstack.org/cgit/openstack/nova-powervm/commit/?id=fa64a93c965e6a6692711962ad6584534da81695 >> should be a perfectly acceptable alternative in at least some cases. >> Agreed? > > I believe the confusion is that ignore_basepython_conflict didn't > appear in a release of tox until after we started patching projects > for this effort (in fact it was added to tox in part because we > discovered the issue in originally attempting to use basepython > globally). Yeah, if you're okay with requiring tox 3.1+ then you can use that instead. We've been avoiding it for now in other projects because some of the distros aren't shipping tox 3.1 yet and some people prefer not to mix distro Python packages and pip ones. At some point I expect we'll migrate everything to the new behavior though. 
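For projects able to require tox >= 3.1, the flag Jeremy and Ben mention lets a single global override coexist with interpreter-named environments. A minimal sketch, assuming nothing about any particular project's tox.ini:

```ini
[tox]
minversion = 3.1
# With this set, an environment whose name implies an interpreter
# (py27, py35, py36, ...) keeps that interpreter even though a
# conflicting basepython is declared globally below.
ignore_basepython_conflict = true

[testenv]
basepython = python3
```

Without ignore_basepython_conflict, the global basepython would also apply to py27, which is exactly the breakage described earlier in the thread.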
From doug at doughellmann.com Fri Sep 28 18:51:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 28 Sep 2018 14:51:23 -0400 Subject: [openstack-dev] [goal][python3] week 7 update In-Reply-To: <20180928183808.dagf56dv7yrfhs46@yuggoth.org> References: <20180928183808.dagf56dv7yrfhs46@yuggoth.org> Message-ID: Jeremy Stanley writes: > On 2018-09-28 13:58:52 -0400 (-0400), William M Edmonds wrote: >> Doug Hellmann wrote on 09/26/2018 06:29:11 PM: >> >> > * We do not want to set the override once in testenv, because that >> > breaks the more specific versions used in default environments like >> > py35 and py36 (at least under older versions of tox). >> >> >> I assume that something like >> https://git.openstack.org/cgit/openstack/nova-powervm/commit/?id=fa64a93c965e6a6692711962ad6584534da81695 >> should be a perfectly acceptable alternative in at least some cases. >> Agreed? > > I believe the confusion is that ignore_basepython_conflict didn't > appear in a release of tox until after we started patching projects > for this effort (in fact it was added to tox in part because we > discovered the issue in originally attempting to use basepython > globally). Right. The scripted patches work with older versions of tox as well. They also have the benefit of only changing the environments into which the new setting is injected, which means if you have a py27-do-something-random environment it isn't going to suddenly start using python 3 instead of python 2.7. The thing we care about for the goal is ensuring that the required jobs run under python 3. Teams are, as always, completely free to choose alternative implementations if they are willing to update the patches (or write alternative ones). 
Doug

From lbragstad at gmail.com Fri Sep 28 18:54:01 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 28 Sep 2018 13:54:01 -0500 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote:
> On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg wrote:
> >
> > Ideally I would like to see it in the form of least specific to most specific. But more importantly in a way that there are no additional delimiters between the service type and the resource. Finally, I do not like the change of plurality depending on action type.
> >
> > I propose we consider
> >
> > <service-type>:<resource>:[<sub-resource>:]<action>
> >
> > Example for keystone (note, action names below are strictly examples; I am fine with whatever form those actions take):
> > identity:projects:create
> > identity:projects:delete
> > identity:projects:list
> > identity:projects:get
> >
> > It keeps things simple and consistent when you're looking through overrides / defaults.
> > --Morgan
> +1 -- I think the ordering will be cleaner if `resource` comes before `action|subaction`.

++ These are excellent points. I especially like being able to omit the convention about plurality. Furthermore, I'd like to add that I think we should make the resource singular (e.g., project instead of projects). For example:

compute:server:list
compute:server:update
compute:server:create
compute:server:delete
compute:server:action:reboot
compute:server:action:confirm_resize (or confirm-resize)

Otherwise, someone might mistake compute:servers:get as "list". This is ultra-nit-picky, but something I thought of when seeing the usage of "get_all" in policy names in favor of "list."
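The pattern behind these examples composes mechanically. As a quick sketch (the helper below is hypothetical, purely illustrative, and not part of oslo.policy or any OpenStack project):

```python
# Hypothetical helper illustrating the proposed convention:
#   <service-type>:<resource>:[<sub-resource>:]<action>
# The function name and signature are assumptions made for this sketch.

def policy_name(service_type, resource, action, sub_resource=None):
    """Compose a policy name from its parts; resource is assumed singular."""
    parts = [service_type, resource]
    if sub_resource is not None:
        parts.append(sub_resource)
    parts.append(action)
    return ":".join(parts)

print(policy_name("compute", "server", "list"))                      # compute:server:list
print(policy_name("compute", "server", "reboot", sub_resource="action"))  # compute:server:action:reboot
print(policy_name("identity", "project", "create"))                  # identity:project:create
```

In a real service these strings would be registered as policy defaults (e.g., via oslo.policy's DocumentedRuleDefault) rather than composed at runtime; the helper only demonstrates that one rule yields every name in the list above.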
In summary, the new convention based on the most recent feedback should be:

*<service-type>:<resource>:[<sub-resource>:]<action>*

Rules:

- service-type is always defined in the service types authority
- resources are always singular

Thanks to all for sticking through this tedious discussion. I appreciate it.

> > /R > > Harry > > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad > wrote: > >> > >> Bumping this thread again and proposing two conventions based on the > discussion here. I propose we decide on one of the two following > conventions: > >> > >> <service-type>:<resource>:<action> > >> > >> or > >> > >> <service-type>:<action>_<resource> > >> > >> Where <service-type> is the corresponding service type of the project > [0], and <action> is either create, get, list, update, or delete. I think > decoupling the method from the policy name should aid in consistency, > regardless of the underlying implementation. The HTTP method specifics can > still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > >> > >> I think the plurality of the resource should default to what makes > sense for the operation being carried out (e.g., list:foobars, > create:foobar). > >> > >> I don't mind the first one because it's clear about what the delimiter > is and it doesn't look weird when projects have something like: > >> > >> <service-type>:<resource>:<sub-resource>:<action> > >> > >> If folks are ok with this, I can start working on some documentation > that explains the motivation for this. Afterward, we can figure out how we > want to track this work. > >> > >> What color do you want the shed to be? > >> > >> [0] https://service-types.openstack.org/service-types.json > >> [1] > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > >> > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad > wrote: > >>> > >>> > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann < > gmann at ghanshyammann.com> wrote: > >>>> > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < > john at johngarbutt.com> wrote ---- > >>>> > tl;dr: +1 consistent names > >>>> > I would make the names mirror the API...
because the Operator > setting them knows the API, not the codeIgnore the crazy names in Nova, I > certainly hate them > >>>> > >>>> Big +1 on consistent naming which will help operator as well as > developer to maintain those. > >>>> > >>>> > > >>>> > Lance Bragstad wrote: > >>>> > > I'm curious if anyone has context on the "os-" part of the > format? > >>>> > > >>>> > My memory of the Nova policy mess...* Nova's policy rules > traditionally followed the patterns of the code > >>>> > ** Yes, horrible, but it happened.* The code used to have the > OpenStack API and the EC2 API, hence the "os"* API used to expand with > extensions, so the policy name is often based on extensions** note most of > the extension code has now gone, including lots of related policies* Policy > in code was focused on getting us to a place where we could rename policy** > Whoop whoop by the way, it feels like we are really close to something > sensible now! > >>>> > Lance Bragstad wrote: > >>>> > Thoughts on using create, list, update, and delete as opposed to > post, get, put, patch, and delete in the naming convention? > >>>> > I could go either way as I think about "list servers" in the > API.But my preference is for the URL stub and POST, GET, etc. > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad < > lbragstad at gmail.com> wrote:If we consider dropping "os", should we > entertain dropping "api", too? Do we have a good reason to keep "api"?I > wouldn't be opposed to simple service types (e.g "compute" or > "loadbalancer"). > >>>> > +1The API is known as "compute" in api-ref, so the policy should > be for "compute", etc. > >>>> > >>>> Agree on mapping the policy name with api-ref as much as possible. > Other than policy name having 'os-', we have 'os-' in resource name also in > nova API url like /os-agents, /os-aggregates etc (almost every resource > except servers , flavors). As we cannot get rid of those from API url, we > need to keep the same in policy naming too? 
or we can have policy name like > compute:agents:create/post but that mismatch from api-ref where agents > resource url is os-agents. > >>> > >>> > >>> Good question. I think this depends on how the service does policy > enforcement. > >>> > >>> I know we did something like this in keystone, which required policy > names and method names to be the same: > >>> > >>> "identity:list_users": "..." > >>> > >>> Because the initial implementation of policy enforcement used a > decorator like this: > >>> > >>> from keystone import controller > >>> > >>> @controller.protected > >>> def list_users(self): > >>> ... > >>> > >>> Having the policy name the same as the method name made it easier for > the decorator implementation to resolve the policy needed to protect the > API because it just looked at the name of the wrapped method. The advantage > was that it was easy to implement new APIs because you only needed to add a > policy, implement the method, and make sure you decorate the implementation. > >>> > >>> While this worked, we are moving away from it entirely. The decorator > implementation was ridiculously complicated. Only a handful of keystone > developers understood it. With the addition of system-scope, it would have > only become more convoluted. It also enables a much more copy-paste pattern > (e.g., so long as I wrap my method with this decorator implementation, > things should work right?). Instead, we're calling enforcement within the > controller implementation to ensure things are easier to understand. It > requires developers to be cognizant of how different token types affect the > resources within an API. That said, coupling the policy name to the method > name is no longer a requirement for keystone. > >>> > >>> Hopefully, that helps explain why we needed them to match. 
> >>> > >>>> > >>>> > >>>> Also we have action API (i know from nova not sure from other > services) like POST /servers/{server_id}/action {addSecurityGroup} and > their current policy name is all inconsistent. few have policy name > including their resource name like > "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in > policy name like "os_compute_api:os-admin-actions:reset_state" and few has > direct action name like "os_compute_api:os-console-output" > >>> > >>> > >>> Since the actions API relies on the request body and uses a single > HTTP method, does it make sense to have the HTTP method in the policy name? > It feels redundant, and we might be able to establish a convention that's > more meaningful for things like action APIs. It looks like cinder has a > similar pattern [0]. > >>> > >>> [0] > https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action > >>> > >>>> > >>>> > >>>> May be we can make them consistent with > :: or any better opinion. > >>>> > >>>> > From: Lance Bragstad > The topic of having > consistent policy names has popped up a few times this week. > >>>> > > >>>> > I would love to have this nailed down before we go through all the > policy rules again. In my head I hope in Nova we can go through each policy > rule and do the following: > >>>> > * move to new consistent policy name, deprecate existing name* > hardcode scope check to project, system or user** (user, yes... keypairs, > yuck, but its how they work)** deprecate in rule scope checks, which are > largely bogus in Nova anyway* make read/write/admin distinction** therefore > adding the "noop" role, amount other things > >>>> > >>>> + policy granularity. > >>>> > >>>> It is good idea to make the policy improvement all together and for > all rules as you mentioned. But my worries is how much load it will be on > operator side to migrate all policy rules at same time? 
What will be the > deprecation period etc which i think we can discuss on proposed spec - > https://review.openstack.org/#/c/547850 > >>> > >>> > >>> Yeah, that's another valid concern. I know at least one operator has > weighed in already. I'm curious if operators have specific input here. > >>> > >>> It ultimately depends on if they override existing policies or not. If > a deployment doesn't have any overrides, it should be a relatively simple > change for operators to consume. > >>> > >>>> > >>>> > >>>> > >>>> -gmann > >>>> > >>>> > Thanks,John > __________________________________________________________________________ > >>>> > OpenStack Development Mailing List (not for usage questions) > >>>> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > > >>>> > >>>> > >>>> > >>>> > __________________________________________________________________________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request 
at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hrybacki at redhat.com Fri Sep 28 19:31:13 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Fri, 28 Sep 2018 15:31:13 -0400 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: On Fri, Sep 28, 2018 at 2:54 PM Lance Bragstad wrote: > > > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote: >> >> On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg >> wrote: >> > >> > Ideally I would like to see it in the form of least specific to most specific. But more importantly in a way that there is no additional delimiters between the service type and the resource. Finally, I do not like the change of plurality depending on action type. >> > >> > I propose we consider >> > >> > ::[:] >> > >> > Example for keystone (note, action names below are strictly examples I am fine with whatever form those actions take): >> > identity:projects:create >> > identity:projects:delete >> > identity:projects:list >> > identity:projects:get >> > >> > It keeps things simple and consistent when you're looking through overrides / defaults. >> > --Morgan >> +1 -- I think the ordering if `resource` comes before >> `action|subaction` will be more clean. > > > ++ > > These are excellent points. I especially like being able to omit the convention about plurality. Furthermore, I'd like to add that I think we should make the resource singular (e.g., project instead or projects). For example: > > compute:server:list > compute:server:update > compute:server:create > compute:server:delete > compute:server:action:reboot > compute:server:action:confirm_resize (or confirm-resize) > > Otherwise, someone might mistake compute:servers:get, as "list". 
This is ultra-nit-picky, but something I thought of when seeing the usage of "get_all" in policy names in favor of "list."
>
> In summary, the new convention based on the most recent feedback should be:
>
> <service-type>:<resource>:[<sub-resource>:]<action>
>
> Rules:
>
> service-type is always defined in the service types authority
> resources are always singular

++ plurality can be determined by related action. +++ for removing possible ambiguity.

> Thanks to all for sticking through this tedious discussion. I appreciate it.

Thanks for pushing the conversation, Lance!

>> >> >> /R >> >> Harry >> > >> > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad wrote: >> >> >> >> Bumping this thread again and proposing two conventions based on the discussion here. I propose we decide on one of the two following conventions: >> >> >> >> <service-type>:<resource>:<action> >> >> >> >> or >> >> >> >> <service-type>:<action>_<resource> >> >> >> >> Where <service-type> is the corresponding service type of the project [0], and <action> is either create, get, list, update, or delete. I think decoupling the method from the policy name should aid in consistency, regardless of the underlying implementation. The HTTP method specifics can still be relayed using oslo.policy's DocumentedRuleDefault object [1]. >> >> >> >> I think the plurality of the resource should default to what makes sense for the operation being carried out (e.g., list:foobars, create:foobar). >> >> >> >> I don't mind the first one because it's clear about what the delimiter is and it doesn't look weird when projects have something like: >> >> >> >> <service-type>:<resource>:<sub-resource>:<action> >> >> >> >> If folks are ok with this, I can start working on some documentation that explains the motivation for this. Afterward, we can figure out how we want to track this work. >> >> >> >> What color do you want the shed to be?
>> >> >> >> [0] https://service-types.openstack.org/service-types.json >> >> [1] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule >> >> >> >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad wrote: >> >>> >> >>> >> >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann wrote: >> >>>> >> >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt wrote ---- >> >>>> > tl;dr: +1 consistent names >> >>>> > I would make the names mirror the API... because the Operator setting them knows the API, not the code. Ignore the crazy names in Nova; I certainly hate them. >> >>>> >> >>>> Big +1 on consistent naming, which will help operators as well as developers maintain these. >> >>>> >> >>>> > >> >>>> > Lance Bragstad wrote: >> >>>> > > I'm curious if anyone has context on the "os-" part of the format? >> >>>> > >> >>>> > My memory of the Nova policy mess...
>> >>>> > * Nova's policy rules traditionally followed the patterns of the code
>> >>>> > ** Yes, horrible, but it happened.
>> >>>> > * The code used to have the OpenStack API and the EC2 API, hence the "os"
>> >>>> > * API used to expand with extensions, so the policy name is often based on extensions
>> >>>> > ** note most of the extension code has now gone, including lots of related policies
>> >>>> > * Policy in code was focused on getting us to a place where we could rename policy
>> >>>> > ** Whoop whoop by the way, it feels like we are really close to something sensible now!
>> >>>> > Lance Bragstad wrote: >> >>>> > Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention? >> >>>> > I could go either way as I think about "list servers" in the API. But my preference is for the URL stub and POST, GET, etc. >> >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote: If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple service types (e.g. "compute" or "loadbalancer").
>> >>>> > +1. The API is known as "compute" in api-ref, so the policy should be for "compute", etc. >> >>>> >> >>>> Agreed on mapping the policy names to api-ref as much as possible. Besides policy names having 'os-', we also have 'os-' in resource names in the nova API URLs, like /os-agents and /os-aggregates (almost every resource except servers and flavors). Since we cannot get rid of those in the API URLs, do we need to keep the same in policy naming too? Or we can have a policy name like compute:agents:create/post, but that mismatches api-ref, where the agents resource URL is os-agents. >> >>> >> >>> >> >>> Good question. I think this depends on how the service does policy enforcement. >> >>> >> >>> I know we did something like this in keystone, which required policy names and method names to be the same: >> >>> >> >>> "identity:list_users": "..." >> >>> >> >>> Because the initial implementation of policy enforcement used a decorator like this: >> >>> >> >>> from keystone import controller >> >>> >> >>> @controller.protected >> >>> def list_users(self): >> >>> ... >> >>> >> >>> Having the policy name the same as the method name made it easier for the decorator implementation to resolve the policy needed to protect the API because it just looked at the name of the wrapped method. The advantage was that it was easy to implement new APIs because you only needed to add a policy, implement the method, and make sure you decorate the implementation. >> >>> >> >>> While this worked, we are moving away from it entirely. The decorator implementation was ridiculously complicated. Only a handful of keystone developers understood it. With the addition of system-scope, it would have only become more convoluted. It also enables a much more copy-paste pattern (e.g., so long as I wrap my method with this decorator implementation, things should work right?). Instead, we're calling enforcement within the controller implementation to ensure things are easier to understand.
It requires developers to be cognizant of how different token types affect the resources within an API. That said, coupling the policy name to the method name is no longer a requirement for keystone. >> >>> >> >>> Hopefully, that helps explain why we needed them to match. >> >>>> >> >>>> >> >>>> Also, we have action APIs (I know from nova, not sure about other services) like POST /servers/{server_id}/action {addSecurityGroup}, and their current policy names are all inconsistent. A few include the resource name, like "os_compute_api:os-flavor-access:add_tenant_access", a few have 'action' in the policy name, like "os_compute_api:os-admin-actions:reset_state", and a few use the action name directly, like "os_compute_api:os-console-output". >> >>> >> >>> >> >>> Since the actions API relies on the request body and uses a single HTTP method, does it make sense to have the HTTP method in the policy name? It feels redundant, and we might be able to establish a convention that's more meaningful for things like action APIs. It looks like cinder has a similar pattern [0]. >> >>> >> >>> [0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action >> >>>> >> >>>> >> >>>> Maybe we can make them consistent with <service-type>:<resource>:<action>, or is there a better option? >> >>>> >> >>>> > From: Lance Bragstad > The topic of having consistent policy names has popped up a few times this week. >> >>>> > >> >>>> > I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following:
>> >>>> > * move to new consistent policy name, deprecate existing name
>> >>>> > * hardcode scope check to project, system or user
>> >>>> > ** (user, yes... keypairs, yuck, but it's how they work)
>> >>>> > ** deprecate in rule scope checks, which are largely bogus in Nova anyway
>> >>>> > * make read/write/admin distinction
>> >>>> > ** therefore adding the "noop" role, among other things >> >>>> >> >>>> + policy granularity.
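The keystone coupling Lance describes earlier in this message — the decorator resolving the policy purely from the wrapped method's name — can be sketched roughly like this. This is a simplified illustration, not keystone's actual implementation; the `POLICIES` table and `protected` helper are stand-ins:

```python
import functools

# Toy policy table standing in for the registered policy defaults.
POLICIES = {'identity:list_users': 'role:reader'}

def protected(func):
    """Derive the policy rule name from the wrapped method's name.

    Because the rule is looked up as 'identity:' + the method name,
    the two had to match -- the coupling described in the message above.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        rule = 'identity:%s' % func.__name__
        if rule not in POLICIES:
            raise RuntimeError('no policy registered for %s' % rule)
        # A real enforcer would evaluate the rule against the request
        # context here; this sketch only checks that the rule exists.
        return func(*args, **kwargs)
    return wrapper

@protected
def list_users():
    return ['alice', 'bob']
```

Rename `list_users` (say, to `get_all_users`) and the lookup breaks, which hints at why the name coupling was eventually abandoned.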
>> >>>> >> >>>> It is a good idea to make the policy improvement all together and for all rules, as you mentioned. But my worry is how much load it will be on the operator side to migrate all policy rules at the same time. What will the deprecation period be, etc., which I think we can discuss on the proposed spec - https://review.openstack.org/#/c/547850 >> >>> >> >>> >> >>> Yeah, that's another valid concern. I know at least one operator has weighed in already. I'm curious if operators have specific input here. >> >>> >> >>> It ultimately depends on if they override existing policies or not. If a deployment doesn't have any overrides, it should be a relatively simple change for operators to consume. >> >>>> >> >>>> >> >>>> >> >>>> -gmann >> >>>> >> >>>> > Thanks, John
> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cboylan at sapwetik.org Fri Sep 28 20:12:40 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 28 Sep 2018 13:12:40 -0700 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> Message-ID: <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> On Wed, Sep 19, 2018, at 12:11 PM, Clark Boylan wrote: > Hello everyone, > > You may have noticed there is a large Zuul job backlog and changes are > not getting CI reports as quickly as you might expect. There are several > factors interacting with each other to make this the case. The short > version is that one of our clouds is performing upgrades and has been > removed from service, and we have a large number of gate failures which > cause things to reset and start over. We have fewer resources than > normal and are using them inefficiently. Zuul is operating as expected. > > Continue reading if you'd like to understand the technical details and > find out how you can help make this better. > > Zuul gates related projects in shared queues.
Changes enter these queues > and are ordered in a speculative future state that Zuul assumes will > pass because multiple humans have reviewed the changes and said they are > good (also they had to pass check testing first). Problems arise when > tests fail, forcing Zuul to evict changes from the speculative future > state, build a new state, then start jobs over again for this new > future. > > Typically this doesn't happen often and we merge many changes at a time, > quickly pushing code into our repos. Unfortunately, the results are > painful when we fail often as we end up rebuilding future states and > restarting jobs often. Currently we have the gate and release jobs set > to the highest priority as well so they run jobs before other queues. > This means the gate can starve other work if it is flaky. We've > configured things this way because the gate is not supposed to be flaky > since we've reviewed things and already passed check testing. One of the > tools we have in place to make this less painful is that each gate queue > operates on a window that grows and shrinks, similar to TCP > slow-start. As changes merge we increase the size of the window and when > they fail to merge we decrease it. This reduces the size of the future > state that must be rebuilt and retested on failure when things are > persistently flaky. > > The best way to make this better is to fix the bugs in our software, > whether that is in the CI system itself or the software being tested. > The first step in doing that is to identify and track the bugs that we > are dealing with. We have a tool called elastic-recheck that does this > using indexed logs from the jobs. The idea there is to go through the > list of unclassified failures [0] and fingerprint them so that we can > track them [1]. With that data available we can then prioritize fixing > the bugs that have the biggest impact.
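The window behavior Clark describes (grow as changes merge, shrink when they fail, bounded on both ends) can be approximated in a few lines. The growth and shrink factors here are illustrative assumptions, not Zuul's actual tuning:

```python
def next_window(window, merged, floor=2, ceiling=20):
    """Grow the active-changes window on a merge, shrink it on a failure.

    Mimics the slow-start-like behavior described above; the exact
    increase/decrease policy is an assumption, not Zuul's real config.
    """
    if merged:
        return min(window + 1, ceiling)   # grow as changes merge
    return max(window // 2, floor)        # shrink when they fail to merge

# A flaky gate keeps the window small, limiting how much speculative
# future state has to be rebuilt and retested after each failure.
window = 10
for merged in (True, True, False, True):
    window = next_window(window, merged)
```

After that merge/fail sequence the window has gone 10, 11, 12, 6, 7 — the single failure undoes several successes, which is why persistent flakiness is so expensive.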
> > Unfortunately, right now our classification rate is very poor (only > 15%), which makes it difficult to know what exactly is causing these > failures. Mriedem and I have quickly scanned the unclassified list, and > it appears there is a db migration testing issue causing these tests to > timeout across several projects. Mriedem is working to get this > classified and tracked which should help, but we will also need to fix > the bug. On top of that it appears that Glance has flaky functional > tests (both python2 and python3) which are causing resets and should be > looked into. > > If you'd like to help, let mriedem or myself know and we'll gladly work > with you to get elasticsearch queries added to elastic-recheck. We are > likely less help when it comes to fixing functional tests in Glance, but > I'm happy to point people in the right direction for that as much as I > can. If you can take a few minutes to do this before/after you issue a > recheck it does help quite a bit. > > One general thing I've found would be helpful is if projects can clean > up the deprecation warnings in their log outputs. The persistent > "WARNING you used the old name for a thing" messages make the logs large > and much harder to read to find the actual failures. > > As a final note this is largely targeted at the OpenStack Integrated > gate (Nova, Glance, Cinder, Keystone, Swift, Neutron) since that appears > to be particularly flaky at the moment. The Zuul behavior applies to > other gate pipelines (OSA, Tripleo, Airship, etc) as does elastic- > recheck and related tooling. If you find your particular pipeline is > flaky I'm more than happy to help in that context as well. > > [0] http://status.openstack.org/elastic-recheck/data/integrated_gate.html > [1] http://status.openstack.org/elastic-recheck/gate.html I was asked to write a followup to this as the long Zuul queues have persisted through this week. Largely because the situation from last week hasn't changed much. 
We were still without the upgraded cloud region while we worked around a network configuration bug, then once that was addressed we ran into neutron port assignment and deletion issues. We think these are both fixed and we are running in this region again as of today. Other good news is our classification rate is up significantly. We can use that information to go through the top identified gate bugs: Network Connectivity issues to test nodes [2]. This is the current top of the list, but I think its impact is relatively small. What is happening here is jobs fail to connect to their test nodes early in the pre-run playbook and then fail. Zuul will rerun these jobs for us because they failed in the pre-run step. Prior to zuulv3 we had nodepool run a ready script before marking test nodes as ready; this script would've caught and filtered out these broken network nodes early. We now notice them late during the pre-run of a job. Pip fails to find distribution for package [3]. Earlier in the week we had the in-region mirror fail in two different regions for unrelated errors. These mirrors were fixed and the only other hits for this bug come from Ara, which tried to install the 'black' package on python3.5, but this package requires python>=3.6. yum, no more mirrors to try [4]. At first glance this appears to be an infrastructure issue because the mirror isn't serving content to yum. On further investigation it turned out to be a DNS resolution issue caused by the installation of designate in the tripleo jobs. Tripleo is aware of this issue and working to correct it. Stackviz failing on py3 [5]. This is a real bug in stackviz caused by subunit data being binary rather than utf8-encoded strings. I've written a fix for this problem at https://review.openstack.org/606184, but in doing so found that this was a known issue back in March and there was already a proposed fix, https://review.openstack.org/#/c/555388/3.
It would be helpful if the QA team could care for this project and get a fix in. Otherwise, we should consider disabling stackviz on our tempest jobs (though the output from stackviz is often useful). There are other bugs being tracked by e-r. Some are bugs in the openstack software and I'm sure some are also bugs in the infrastructure. I have not yet had the time to work through the others though. It would be helpful if project teams could prioritize the debugging and fixing of these issues though. [2] http://status.openstack.org/elastic-recheck/gate.html#1793370 [3] http://status.openstack.org/elastic-recheck/gate.html#1449136 [4] http://status.openstack.org/elastic-recheck/gate.html#1708704 [5] http://status.openstack.org/elastic-recheck/gate.html#1758054 Clark From sean.mcginnis at gmx.com Fri Sep 28 20:33:18 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 28 Sep 2018 15:33:18 -0500 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: <20180928203318.GA3769@sm-workstation> On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote: > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote: > > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > wrote: > > > > > > Ideally I would like to see it in the form of least specific to most > > specific. But more importantly in a way that there is no additional > > delimiters between the service type and the resource. Finally, I do not > > like the change of plurality depending on action type. 
> > > > > > I propose we consider > > > > > > <service-type>:<resource>:<action>[:<subaction>] > > > > > > Example for keystone (note, action names below are strictly examples; I > > am fine with whatever form those actions take): > > > identity:projects:create > > > identity:projects:delete > > > identity:projects:list > > > identity:projects:get > > > > > > It keeps things simple and consistent when you're looking through > > overrides / defaults. > > > --Morgan > > +1 -- I think the ordering will be cleaner if `resource` comes before > > `action|subaction`. > > > Great idea. This is looking better and better. From openstack at fried.cc Fri Sep 28 20:36:24 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 28 Sep 2018 15:36:24 -0500 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> Message-ID: <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> On 09/28/2018 12:19 PM, Chris Dent wrote: > On Fri, 28 Sep 2018, Jay Pipes wrote: > >> On 09/28/2018 09:25 AM, Eric Fried wrote: >>> It's time somebody said this. > > Yes, a useful topic, I think. > >>> Every time we turn a corner or look under a rug, we find another use >>> case for provider traits in placement. But every time we have to have >>> the argument about whether that use case satisfies the original >>> "intended purpose" of traits. >>> >>> That's the only reason I've ever been able to glean: that it (whatever "it" >>> is) wasn't what the architects had in mind when they came up with the >>> idea of traits. >> >> Don't pussyfoot around things. It's me you're talking about, Eric. You >> could just ask me instead of passive-aggressively posting to the list >> like this. > > It's not just you. Ed and I have also expressed some fairly strong > statements about how traits are "supposed" to be used and I would > guess that from Eric's perspective all three of us (amongst others) > have some form of architectural influence. Since it takes a village > and all that.
Correct. I certainly wasn't talking about Jay specifically. I also wanted people other than placement cores/architects to participate in the discussion (thanks Julia and Zane). >> They aren't arbitrary. They are there for a reason: a trait is a >> boolean capability. It describes something that either a provider is >> capable of supporting or it isn't. > > This is somewhat (maybe even only slightly) different from what I > think the definition of a trait is, and that nuance may be relevant. > > I describe a trait as a "quality that a resource provider has" (the > car is blue). This contrasts with a resource class which is a > "quantity that a resource provider has" (the car has 4 doors). Yes, this. I don't want us to go off in the weeds about the reason or relevance of the choice of name, but "trait" is a superset of "capability" and easily encompasses "BLUE" or "PHYSNET_PUBLIC" or "OWNED_BY_NEUTRON" or "XYZ_BITSTREAM" or "PCI_ADDRESS_01_AB_23_CD" or "RAID5". > Our implementation is pretty much exactly that ^. We allow > clients to ask "give me things that have qualities x, y, z, not > qualities a, b, c, and quantities of G of 5 and H of 7". > > Add in aggregates and we have exactly what you say: > >> * Does the provider have *capacity* for the requested resources? >> * Does the provider have the required (or forbidden) *capabilities*? >> * Does the provider belong to some group? > > The nuance of difference is that your description of *capabilities* > seems narrower than my description of *qualities* (aka > characteristics). You've got something fairly specific in mind, as a > way of constraining the profusion of noise that has happened with > how various kinds of information about resources of all sorts is > managed in OpenStack, as you describe in your message. > > I do not think it should be placement's job to control that noise.
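Chris's "give me things that have qualities x, y, z, not qualities a, b, c, and quantities of G of 5" query, together with Jay's three questions (capacity, capabilities, group), amounts to a filter like the following sketch. The data model and names are illustrative, not the placement API:

```python
from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    inventory: dict                      # resource class -> available quantity
    traits: set = field(default_factory=set)
    aggregates: set = field(default_factory=set)

def candidates(providers, resources, required=(), forbidden=(), member_of=None):
    """Answer the three questions: capacity, qualities, group membership."""
    names = []
    for p in providers:
        if any(p.inventory.get(rc, 0) < n for rc, n in resources.items()):
            continue                     # not enough capacity
        if not set(required) <= p.traits:
            continue                     # missing a required quality
        if p.traits & set(forbidden):
            continue                     # has a forbidden quality
        if member_of is not None and member_of not in p.aggregates:
            continue                     # not in the requested aggregate
        names.append(p.name)
    return names

# Two toy providers to exercise the filter.
rps = [
    Provider('cn1', {'VCPU': 8, 'MEMORY_MB': 2048}, {'HW_CPU_X86_AVX'}, {'rack1'}),
    Provider('cn2', {'VCPU': 2, 'MEMORY_MB': 2048}, set(), {'rack1'}),
]
```

Note the filter never interprets a trait's value; it only tests set membership, which is exactly the "strict contract" argued for below.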
> It should be placement's job to provide a very strict contract about > what you can do with a trait: > > * create it, if necessary > * assign it to one or more resource providers > * ask for providers that either have it > * ... or do not have it > > That's all. Placement _code_ should _never_ be aware of the value of > a trait (except for the magical MISC_SHARES...). It should never > become possible to regex on traits or do comparisons > (required=<...>). > >> If we want to add further constraints to the placement allocation >> candidates request that ask things like: >> >> * Does the provider have version 1.22.61821 of BIOS firmware from >> Marvell installed on it? > > That's a quality of the provider in a moment. > >> * Does the provider support an FPGA that has had an OVS program >> flashed to it in the last 20 days? > > If you squint, so is this. > >> * Does the provider belong to physical network "corpnet" and also >> support creation of virtual NICs of type either "DIRECT" or "NORMAL"? > > And these. > > But at least some of them are dynamic rather than some kind of > platonic ideal associated with the resource provider. > > I don't think placement should be concerned about temporal aspects > of traits. If we can't write a web service that can handle setting > lots of traits every second of every day, we should go home. If > clients of placement want to set weird traits, more power to them. > > However, if clients of placement (such as nova) which are being the > orchestrator of resource providers manipulated by multiple systems > (neutron, cinder, ironic, cyborg, etc) wish to set some constraints > on how and what traits can do and mean, then that is up to them. > > nova-scheduler is the thing that is doing `GET > /allocation_candidates` for those multiple systems. It presumably > should have some say in what traits it is willing to express and > use. Right, this is where it's getting sticky.
I feel like the push-back comes from people wearing their placement hats saying "you can't (ab)use placement like this, even though it would totally work" versus people wearing their nova/ironic/whatever hats saying "we shouldn't favor this implementation because there's something fundamentally wrong with it and/or this other way would be better". > But the placement service doesn't and shouldn't care. > >> Then we should add a data model that allows providers to be decorated >> with key/value (or more complex than key/value) information where we >> can query for those kinds of constraints without needing to encode all >> sorts of non-binary bits of information into a capability string. > > Let's never do this, please. The three capabilities (ha!) of > placement that you listed above ("Does the...") are very powerful as > is and have a conceptual integrity that's really quite awesome. I > think keeping it contained and constrained in very "simple" concepts > like that was a stroke of genius you (Jay) made and I'd hope we can > keep it clean like that. So here it is. Two of the top influencers in placement, one saying we shouldn't overload traits, the other saying we shouldn't add a primitive that would obviate the need for that. Historically, this kind of disagreement seems to result in an impasse: neither thing happens and those who would benefit are forced to find a workaround or punt. Frankly, I don't particularly care which way we go; I just want to be able to do the things. > If we weren't a multiple-service oriented system, and instead had > some kind of k8s-like etcd-like > keeper-of-all-the-info-about-everything, then sure, having what we > currently model as resource providers be a giant blob of metadata > (with quantities, qualities, and key-values) that is an authority > for the entire system might make some kind of sense. > > But we don't.
If we wanted to migrate to having something like that, > using placement as the trojan horse for such a change, either with > intent or by accident, would be unfortunate. > >> Propose such a thing and I'll gladly support it. But I won't support >> bastardizing the simple concept of a boolean capability just because >> we don't want to change the API or database schema. > > For me, it is not a matter of not wanting to change the API or the > database schema. It's about not wanting to expand the concepts, and > thus the purpose, of the system. It's about wanting to keep focus > and functionality narrow so we can have a target which is "maturity" > and know when we're there. > > My summary: Traits are symbols that are 255 characters long that are > associated with a resource provider. It's possible to query for > resource providers that have or do not have a specific trait. This > has the effect of making the meaning of a trait a descriptor of the > resource provider. What the descriptor signifies is up to the thing > creating and using the resource provider, not placement. We need to > harden that contract and stick to it. Placement is like a common > carrier, it doesn't care what's in the box. > > /me cues brad pitt > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at fried.cc Fri Sep 28 20:42:23 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 28 Sep 2018 15:42:23 -0500 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <1538145718.22269.0@smtp.office365.com> References: <1538145718.22269.0@smtp.office365.com> Message-ID: On 09/28/2018 09:41 AM, Balázs Gibizer wrote: > > > On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried wrote: >> It's time somebody said this. 
>> >> Every time we turn a corner or look under a rug, we find another use >> case for provider traits in placement. But every time we have to have >> the argument about whether that use case satisfies the original >> "intended purpose" of traits. >> >> That's the only reason I've ever been able to glean: that it (whatever "it" >> is) wasn't what the architects had in mind when they came up with the >> idea of traits. We're not even talking about anything that would require >> changes to the placement API. Just, "Oh, that's not a *capability* - >> shut it down." >> >> Bubble wrap was originally intended as a textured wallpaper and a >> greenhouse insulator. Can we accept the fact that traits have (many, >> many) uses beyond marking capabilities, and quit with the arbitrary >> restrictions? > > How far are we willing to go? Is an arbitrary (key: value) pair > encoded in a trait name like key_`str(value)` (e.g. CURRENT_TEMPERATURE: > 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in > placement? Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but TEMPERATURE_<value> is not. This thread isn't about setting these parameters; it's about getting us to a point where we can discuss a question just like this one without running up against: "That's a hard no, because you shouldn't encode key/value pairs in traits." "Oh, why's that?" "Because that's not what we intended when we created traits." "But it would work, and the alternatives are way harder." "-1" "But..."
"-1" > > Cheers, > gibi > >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Fri Sep 28 21:01:34 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 28 Sep 2018 16:01:34 -0500 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> Message-ID: On 9/28/2018 3:12 PM, Clark Boylan wrote: > I was asked to write a followup to this as the long Zuul queues have persisted through this week. Largely because the situation from last week hasn't changed much. We were down the upgraded cloud region while we worked around a network configuration bug, then once that was addressed we ran into neutron port assignment and deletion issues. We think these are both fixed and we are running in this region again as of today. > > Other good news is our classification rate is up significantly. We can use that information to go through the top identified gate bugs: > > Network Connectivity issues to test nodes [2]. This is the current top of the list, but I think its impact is relatively small. What is happening here is jobs fail to connect to their test nodes early in the pre-run playbook and then fail. Zuul will rerun these jobs for us because they failed in the pre-run step. 
Prior to zuulv3 we had nodepool run a ready script before marking test nodes as ready, this script would've caught and filtered out these broken network nodes early. We now notice them late during the pre-run of a job. > > Pip fails to find distribution for package [3]. Earlier in the week we had the in region mirror fail in two different regions for unrelated errors. These mirrors were fixed and the only other hits for this bug come from Ara which tried to install the 'black' package on python3.5 but this package requires python>=3.6. > > yum, no more mirrors to try [4]. At first glance this appears to be an infrastructure issue because the mirror isn't serving content to yum. On further investigation it turned out to be a DNS resolution issue caused by the installation of designate in the tripleo jobs. Tripleo is aware of this issue and working to correct it. > > Stackviz failing on py3 [5]. This is a real bug in stackviz caused by subunit data being binary not utf8 encoded strings. I've written a fix for this problem athttps://review.openstack.org/606184, but in doing so found that this was a known issue back in March and there was already a proposed fix,https://review.openstack.org/#/c/555388/3. It would be helpful if the QA team could care for this project and get a fix in. Otherwise, we should consider disabling stackviz on our tempest jobs (though the output from stackviz is often useful). > > There are other bugs being tracked by e-r. Some are bugs in the openstack software and I'm sure some are also bugs in the infrastructure. I have not yet had the time to work through the others though. It would be helpful if project teams could prioritize the debugging and fixing of these issues though. 
> > [2]http://status.openstack.org/elastic-recheck/gate.html#1793370 > [3]http://status.openstack.org/elastic-recheck/gate.html#1449136 > [4]http://status.openstack.org/elastic-recheck/gate.html#1708704 > [5]http://status.openstack.org/elastic-recheck/gate.html#1758054 Thanks for the update, Clark. Another thing this week is that the logstash indexing is behind by at least half a day. That's because workers were hitting OOM errors due to giant screen log files that aren't formatted properly so that we only index INFO+ level logs; the workers were instead trying to index the entire file, and some of those files are 33MB *compressed*. So indexing of those identified problematic screen logs has been disabled: https://review.openstack.org/#/c/606197/ I've reported bugs against each related project. -- Thanks, Matt From mriedemos at gmail.com Fri Sep 28 21:07:33 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 28 Sep 2018 16:07:33 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-28 Update Message-ID: <06d2e9df-1ec8-0ce1-5ee2-376b32a8fd7a@gmail.com> There isn't really anything to report this week. There are no new changes up for review that I'm aware of. If your team has posted changes for your project, please update the related task in the story [1]. I'm also waiting for some feedback from glance-minded people about [2]. [1] https://storyboard.openstack.org/#!/story/2003657 [2] http://lists.openstack.org/pipermail/openstack-dev/2018-September/135025.html -- Thanks, Matt From melwittt at gmail.com Fri Sep 28 21:07:44 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 28 Sep 2018 14:07:44 -0700 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <1538145718.22269.0@smtp.office365.com> Message-ID: On Fri, 28 Sep 2018 15:42:23 -0500, Eric Fried wrote: > On 09/28/2018 09:41 AM, Balázs Gibizer wrote: >> >> >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried wrote: >>> It's time somebody said this.
>>> >>> Every time we turn a corner or look under a rug, we find another use >>> case for provider traits in placement. But every time we have to have >>> the argument about whether that use case satisfies the original >>> "intended purpose" of traits. >>> >>> That's the only reason I've ever been able to glean: that it (whatever "it" >>> is) wasn't what the architects had in mind when they came up with the >>> idea of traits. We're not even talking about anything that would require >>> changes to the placement API. Just, "Oh, that's not a *capability* - >>> shut it down." >>> >>> Bubble wrap was originally intended as a textured wallpaper and a >>> greenhouse insulator. Can we accept the fact that traits have (many, >>> many) uses beyond marking capabilities, and quit with the arbitrary >>> restrictions? >> >> How far are we willing to go? Does an arbitrary (key: value) pair >> encoded in a trait name like key_`str(value)` (e.g. CURRENT_TEMPERATURE: >> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in >> placement? > > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but > TEMPERATURE_<value> is not. This thread isn't about setting > these parameters; it's about getting us to a point where we can discuss > a question just like this one without running up against: > > "That's a hard no, because you shouldn't encode key/value pairs in traits." > > "Oh, why's that?" > > "Because that's not what we intended when we created traits." > > "But it would work, and the alternatives are way harder." > > "-1" > > "But..." > > "-1" I think it's not so much about the intention when traits were created and more about what UX callers of the API are left with, if we were to recommend representing everything with traits and not providing another API for key-value use cases. We need to think about what the maintenance of their deployments will look like if traits are the only tool we provide.
I get that we don't want to put artificial restrictions on how API callers can and can't use the traits API, but will they be left with a manageable experience if that's all that's available? I don't have time right now to come up with a really great example, but I'm thinking along the lines of, can this get out of hand (a la "flavor explosion") for an operator using traits to model what their compute hosts can do? Please forgive the oversimplified example I'm going to try to use to illustrate my concern: We all agree we can have traits for resource providers like: * HAS_SSD * HAS_GPU * HAS_WINDOWS But things get less straightforward when we think of traits like: * HAS_OWNER_CINDER * HAS_OWNER_NEUTRON * HAS_OWNER_CYBORG * HAS_RAID_0 * HAS_RAID_1 * HAS_RAID_5 * HAS_RAID_6 * HAS_RAID_10 * HAS_NUMA_CELL_0 * HAS_NUMA_CELL_1 * HAS_NUMA_CELL_2 * HAS_NUMA_CELL_3 I'm concerned about a lot of repetition here and maintenance headache for operators. That's where the thoughts about whether we should provide something like a key-value construct to API callers where they can instead say: * OWNER=CINDER * RAID=10 * NUMA_CELL=0 for each resource provider. If I'm off base with my example, please let me know. I'm not a placement expert. Anyway, I hope that gives an idea of what I'm thinking about in this discussion. I agree we need to pick a direction and go with it. I'm just trying to look out for the experience operators are going to be using this and maintaining it in their deployments. Cheers, -melanie From jgu at suse.com Fri Sep 28 21:47:11 2018 From: jgu at suse.com (James Gu) Date: Fri, 28 Sep 2018 15:47:11 -0600 Subject: [openstack-dev] Airship linux distro support In-Reply-To: <5BAE2B92.4030409@openstack.org> References: <5BAE2B92.4030409@openstack.org> Message-ID: <5BAEA15F0200006C0003C1AF@prv-mh.provo.novell.com> Hello, I submitted a spec to enable multiple Linux distro capability in Airship and bring in OpenSUSE support in addition to Ubuntu. 
The spec is at https://review.openstack.org/#/c/601187 and has received positive feedback from the Airship core team on the direction. We wanted to make the effort known to broader audience through the mailing list and sincerely welcome more developers to join us, review the spec/code and/or implement the feature, expand to other Linux distros such as CentOS etc. Thanks, James -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Fri Sep 28 21:51:24 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 28 Sep 2018 17:51:24 -0400 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <1538145718.22269.0@smtp.office365.com> Message-ID: <543894f0-8ee2-68b8-ac02-898a5359a2c6@gmail.com> On 09/28/2018 04:42 PM, Eric Fried wrote: > On 09/28/2018 09:41 AM, Balázs Gibizer wrote: >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried wrote: >>> It's time somebody said this. >>> >>> Every time we turn a corner or look under a rug, we find another use >>> case for provider traits in placement. But every time we have to have >>> the argument about whether that use case satisfies the original >>> "intended purpose" of traits. >>> >>> That's only reason I've ever been able to glean: that it (whatever "it" >>> is) wasn't what the architects had in mind when they came up with the >>> idea of traits. We're not even talking about anything that would require >>> changes to the placement API. Just, "Oh, that's not a *capability* - >>> shut it down." >>> >>> Bubble wrap was originally intended as a textured wallpaper and a >>> greenhouse insulator. Can we accept the fact that traits have (many, >>> many) uses beyond marking capabilities, and quit with the arbitrary >>> restrictions? >> >> How far are we willing to go? Does an arbitrary (key: value) pair >> encoded in a trait name like key_`str(value)` (e.g. 
CURRENT_TEMPERATURE: >> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in >> placement? > > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but > TEMPERATURE_<value> is not. That's correct, because you're encoding >1 piece of information into the single string (the fact that it's a temperature *and* the value of that temperature are the two pieces of information encoded into the single string). Now that there's multiple pieces of information encoded in the string, the reader of the trait string needs to know how to decode those bits of information, which is exactly what we're trying to avoid doing (because we can see from the ComputeCapabilitiesFilter, the extra_specs mess, and the giant hairball that is the NUMA and CPU pinning "metadata requests" how that turns out). > This thread isn't about setting these parameters; it's about getting > us to a point where we can discuss a question just like this one > without running up against: > > "That's a hard no, because you shouldn't encode key/value pairs in traits." > > "Oh, why's that?" > > "Because that's not what we intended when we created traits." > > "But it would work, and the alternatives are way harder." > > "-1" > > "But..." > > "-1" I believe I've articulated a number of times why traits should remain unary pieces of information, and not just said "because that's what we intended when we created traits". I'm tough on this because I've seen the garbage code and unmaintainable mess that not having structurally sound data modeling concepts and information interpretation rules leads to in Nova and I don't want to encourage any more of it.
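Jay's point about encoding more than one piece of information in a single string can be made concrete with a short sketch (the trait names and helper functions here are hypothetical, not anything in placement or os-traits): a value smuggled into a trait name forces every consumer to share a decode rule, while a unary capability is just a membership test.

```python
PREFIX = 'CUSTOM_TEMPERATURE_'

def provider_temperature(traits):
    """Fragile: recover a value encoded inside a trait name.

    Every consumer must agree that the suffix is an integer; a trait
    like CUSTOM_TEMPERATURE_HIGH would raise ValueError here.
    """
    for t in traits:
        if t.startswith(PREFIX):
            return int(t[len(PREFIX):])
    return None

def is_dangerously_hot(traits):
    """Unary capability: presence of the trait is the whole contract."""
    return 'CUSTOM_TEMPERATURE_DANGEROUSLY_HIGH' in traits
```

The first function is the kind of ad hoc decoding convention the thread is arguing about; the second needs no shared parsing rules at all.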
-jay From johnsomor at gmail.com Fri Sep 28 22:05:30 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 28 Sep 2018 15:05:30 -0700 Subject: [openstack-dev] [neutron][lbaas][neutron-lbaas][octavia] Update on the previously announced deprecation of neutron-lbaas and neutron-lbaas-dashboard Message-ID: During the Queens release cycle we announced the deprecation of neutron-lbaas and neutron-lbaas-dashboard[1]. Today we are announcing the expected end date for the neutron-lbaas and neutron-lbaas-dashboard deprecation cycles. During September 2019 or the start of the “U” OpenStack release cycle, whichever comes first, neutron-lbaas and neutron-lbaas-dashboard will be retired. This means the code will be removed and will not be released as part of the "U" OpenStack release per the infrastructure team’s “retiring a project” process[2]. We continue to maintain a Frequently Asked Questions (FAQ) wiki page to help answer additional questions you may have about this process: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation For more information or if you have additional questions, please see the following resources: The FAQ: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation The Octavia documentation: https://docs.openstack.org/octavia/latest/ Reach out to us via IRC on the Freenode IRC network, channel #openstack-lbaas Weekly Meeting: 20:00 UTC on Wednesdays in #openstack-lbaas on the Freenode IRC network. Sending email to the OpenStack developer mailing list: openstack-dev [at] lists [dot] openstack [dot] org.
Please prefix the subject with '[openstack-dev][Octavia]' Thank you for your support and patience during this transition, Michael Johnson Octavia PTL [1] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126836.html [2] https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project From openstack at nemebean.com Fri Sep 28 22:05:48 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 28 Sep 2018 17:05:48 -0500 Subject: [openstack-dev] [oslo] PTG wrapup Message-ID: <8c4edc9f-216e-3f79-889a-811962b20e55@nemebean.com> A bit belated, but here goes: Monday: Had a good discussion in the Keystone room about oslo.limit with some Nova developers. There was quite a bit of discussion around how the callbacks should work for resource usage and cleanup, and the Nova developers took an action to do some prototyping. Also, there was general consensus in the room that user quotas were probably a thing that should go away and we didn't want to spend a lot of time trying to accommodate them. If you have a different viewpoint on that please let someone involved with this know ASAP. In addition, for the first time the topic of how to migrate from project-specific quota code to oslo.limit got some serious discussion. The current proposal is to have projects support both methods for a cycle to allow migration of the data. A Nova spec is planned to detail how that will work. https://etherpad.openstack.org/p/keystone-stein-unified-limits In the afternoon there was also a productive discussion in the API sig room about the healthcheck middleware. Initially it was a lot of "we want this, but no one has time to work on it", but after some more digging into the existing oslo.middleware code it was determined that we might be able to reuse parts of that to reduce the amount of work needed to implement it. This also makes it an easier sell to projects since many already include the old healthcheck middleware and this would be an extension of it. 
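For readers unfamiliar with the old healthcheck middleware mentioned above: it is typically enabled through a service's paste pipeline, roughly as sketched below (the file path and backend options are illustrative; check the oslo.middleware documentation for the exact settings):

```ini
[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
# Answer GET /healthcheck without invoking the rest of the pipeline
path = /healthcheck
# Optional backend: report the service as down when a flag file exists,
# letting operators drain a node from a load balancer gracefully
backends = disable_by_file
disable_by_file_path = /etc/myservice/healthcheck_disable
```

Extending this existing filter, rather than writing a new one, is what made the proposal an easier sell to projects that already ship it.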
Graham was going to hack on the implementation in his PTG downtime. https://etherpad.openstack.org/p/api-sig-stein-ptg Tuesday: Our scheduled session day. The main points of the discussion were (hopefully) captured in https://etherpad.openstack.org/p/oslo-stein-ptg-planning Highlights: -The oslo.config driver work is continuing. One outcome of the discussion was that we decided to continue to defer the question of how to handle mutable config with drivers. If somebody asks for it then we can revisit. -There was general agreement to proceed with the simple config validator: https://review.openstack.org/#/c/567950/ There is a significantly more complex version of that review out there as well, but it's so large that nobody has had time to review it. The plan is that the added features from that can be added to this simple version in easier-to-digest pieces once the base functionality is there. -The config migration tool is awaiting reviews (I see Doug reviewed it today, thanks!), and then will proceed with phase 2 in which it will try to handle more complex migrations. -oslo.upgradecheck is now a thing. If you don't know what that is, see Matt Riedemann's email updates on the upgrade checkers goal. -There was some extensive discussion around how to add parallel processing to oslo.privsep. The main outcomes were that we needed to get rid of the eventlet dependency from the initial implementation, but we think the rest of the code should already deal with concurrent execution as expected. However, as we are still lacking deep expertise in oslo.privsep since Gus left (help wanted!), it is TBD whether we are right. :-) -A pluggable policy spec and some initial patches are proposed and need reviews. One of these days I will have time to do that. Wednesday: Had a good discussion about migrating Oslo to Storyboard. As you may have noticed, that discussion has continued on the mailing list so check out the [storyboard] tagged threads for details on where that stands. 
If you want to kick the tires of the test import you can do so here: https://storyboard-dev.openstack.org/#!/story/list?project_group_id=74 Thursday: Discussion in the TripleO room about integrating the config drivers work. It sounded like they had a plan to implement support for them when they are available, so \o/. Friday: Mostly continued work on oslo.upgradecheck in between some non-Oslo discussions. I think that's it. If I missed anything or you have questions/comments feel free to reply. Thanks. -Ben From lbragstad at gmail.com Fri Sep 28 22:23:30 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 28 Sep 2018 17:23:30 -0500 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: <20180928203318.GA3769@sm-workstation> References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <20180928203318.GA3769@sm-workstation> Message-ID: Alright - I've worked up the majority of what we have in this thread and proposed a documentation patch for oslo.policy [0]. I think we're at the point where we can finish the rest of this discussion in gerrit if folks are ok with that. [0] https://review.openstack.org/#/c/606214/ On Fri, Sep 28, 2018 at 3:33 PM Sean McGinnis wrote: > On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote: > > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki > wrote: > > > > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > > wrote: > > > > > > > > Ideally I would like to see it in the form of least specific to most > > > specific. But more importantly in a way that there is no additional > > > delimiters between the service type and the resource. Finally, I do not > > > like the change of plurality depending on action type. 
> > > > > > > > I propose we consider > > > > > > > > <service-type>:<resource>:[<sub-resource>:]<action> > > > > > > > > Example for keystone (note, action names below are strictly examples; I > > > am fine with whatever form those actions take): > > > > identity:projects:create > > > > identity:projects:delete > > > > identity:projects:list > > > > identity:projects:get > > > > > > > > It keeps things simple and consistent when you're looking through > > > overrides / defaults. > > > > --Morgan > > > +1 -- I think the ordering, if `resource` comes before > > > `action|subaction`, will be cleaner. > > > > > > > Great idea. This is looking better and better.
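To make the proposed convention concrete, an operator's override file would then read consistently across services; a hypothetical policy.yaml sketch (rule names follow the examples above, and the check strings are invented purely for illustration):

```yaml
# Pattern: <service-type>:<resource>:<action>
"identity:projects:create": "role:admin"
"identity:projects:list": "role:reader"
"compute:servers:create": "role:member"
"compute:servers:delete": "role:admin"
```

Because every rule name sorts by service type first, then resource, then action, related overrides cluster together when scanning defaults.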
I believe (I hope obviously) that we will be able to accelerate placement's velocity with it being extracted, but that won't be enough to suddenly be able to do quickly do all the things we have on the plate. Are we going to make people wait for some unknown amount of time, in the meantime? While there is a grammar that could do some of these things? Unless additional resources come on the scene I don't think is either feasible or reasonable for us to considering doing any model extending at this time (irrespective of the merit of the idea). In some kind of weird belief way I'd really prefer we keep the grammar placement exposes simple, because my experience with HTTP APIs strongly suggests that's very important, and that experience is effectively why I am here, but I have no interest in being a fundamentalist about it. We should argue about it strongly to make sure we get the right result, but it's not a huge deal either way. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mnaser at vexxhost.com Sat Sep 29 00:23:45 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 28 Sep 2018 20:23:45 -0400 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <1538145718.22269.0@smtp.office365.com> Message-ID: On Fri, Sep 28, 2018 at 7:17 PM Chris Dent wrote: > > On Fri, 28 Sep 2018, melanie witt wrote: > > > I'm concerned about a lot of repetition here and maintenance headache for > > operators. That's where the thoughts about whether we should provide > > something like a key-value construct to API callers where they can instead > > say: > > > > * OWNER=CINDER > > * RAID=10 > > * NUMA_CELL=0 > > > > for each resource provider. > > > > If I'm off base with my example, please let me know. I'm not a placement > > expert. > > > > Anyway, I hope that gives an idea of what I'm thinking about in this > > discussion. I agree we need to pick a direction and go with it. 
I'm just > > trying to look out for the experience operators are going to be using this > > and maintaining it in their deployments. > > Despite saying "let's never do this" with regard to having formal > support for key/values in placement, if we did choose to do it (if > that's what we chose, I'd live with it), when would we do it? We > have a very long backlog of features that are not yet done. I > believe (I hope obviously) that we will be able to accelerate > placement's velocity with it being extracted, but that won't be > enough to suddenly be able to do quickly do all the things we have > on the plate. > > Are we going to make people wait for some unknown amount of time, > in the meantime? While there is a grammar that could do some of > these things? > > Unless additional resources come on the scene I don't think is > either feasible or reasonable for us to considering doing any model > extending at this time (irrespective of the merit of the idea). > > In some kind of weird belief way I'd really prefer we keep the > grammar placement exposes simple, because my experience with HTTP > APIs strongly suggests that's very important, and that experience is > effectively why I am here, but I have no interest in being a > fundamentalist about it. We should argue about it strongly to make > sure we get the right result, but it's not a huge deal either way. Is there a spec up for this should anyone want to implement it? > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From soulxu at gmail.com Sat Sep 29 01:56:43 2018 From: soulxu at gmail.com (Alex Xu) Date: Sat, 29 Sep 2018 09:56:43 +0800 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> Message-ID: Chris Dent 于2018年9月29日周六 上午1:19写道: > On Fri, 28 Sep 2018, Jay Pipes wrote: > > > On 09/28/2018 09:25 AM, Eric Fried wrote: > >> It's time somebody said this. > > Yes, a useful topic, I think. > ++, I'm interesting this topic also, since it confuses me for a long time... > > >> Every time we turn a corner or look under a rug, we find another use > >> case for provider traits in placement. But every time we have to have > >> the argument about whether that use case satisfies the original > >> "intended purpose" of traits. > >> > >> That's only reason I've ever been able to glean: that it (whatever "it" > >> is) wasn't what the architects had in mind when they came up with the > >> idea of traits. > > > > Don't pussyfoot around things. It's me you're talking about, Eric. You > could > > just ask me instead of passive-aggressively posting to the list like > this. > > It's not just you. Ed and I have also expressed some fairly strong > statement about how traits are "supposed" to be used and I would > guess that from Eric's perspective all three of us (amongst others) > have some form of architectural influence. Since it takes a village > and all that. > > > They aren't arbitrary. They are there for a reason: a trait is a boolean > > capability. It describes something that either a provider is capable of > > supporting or it isn't. > > This is somewhat (maybe even only slightly) different from what I > think the definition of a trait is, and that nuance may be relevant. > > I describe a trait as a "quality that a resource provider has" (the > car is blue). This contrasts with a resource class which is a > "quantity that a resource provider has" (the car has 4 doors). 
> > Yes, this is what I'm thinking when I propose the Trait. Basically, I'm trying to match two points in the proposal: #1 we need qualitative of resource, #2 we don't want another metadata API, since metadata API isn't discoverable and wild place, people put anything to it. Nobody knows what metadata available in the code except deep into the code. For #1, just as Chris said. For #2, You have to create Trait before using it, and we have API to query traits, make it discoverable in the API. And standard trait make its naming has rule, then as Jay suggested, we have os-traits library to store all the standard traits. But we have to have custom trait, since there have use-case for managing resource out of OpenStack. > Our implementation is pretty much exactly that ^. We allow > clients to ask "give me things that have qualities x, y, z, not > qualities a, b, c, and quanities of G of 5 and H of 7". > > Add in aggregates and we have exactly what you say: > > > * Does the provider have *capacity* for the requested resources? > > * Does the provider have the required (or forbidden) *capabilities*? > > * Does the provider belong to some group? > > The nuance of difference is that your description of *capabilities* > seems more narrow than my description of *qualities* (aka > characteristics). You've got something fairly specific in mind, as a > way of constraining the profusion of noise that has happened with > how various kinds of information about resources of all sorts is > managed in OpenStack, as you describe in your message. > > I do not think it should be placement's job to control that noise. > It should be placement's job to provide a very strict contract about > what you can do with a trait: > > * create it, if necessary > * assign it to one or more resource providers > * ask for providers that either have it > * ... or do not have it > > That's all. Placement _code_ should _never_ be aware of the value of > a trait (except for the magical MISC_SHARES...). 
It should never > become possible to regex on traits or do comparisons > (required= ++ > > > If we want to add further constraints to the placement allocation > candidates > > request that ask things like: > > > > * Does the provider have version 1.22.61821 of BIOS firmware from > Marvell > > installed on it? > > That's a quality of the provider in a moment. > > > * Does the provider support an FPGA that has had an OVS program flashed > to it > > in the last 20 days? > > If you squint, so is this. > > > * Does the provider belong to physical network "corpnet" and also > support > > creation of virtual NICs of type either "DIRECT" or "NORMAL"? > > And these. > > But at least some of them are dynamic rather than some kind of > platonic ideal associated with the resource provider. > > I don't think placement should be concerned about temporal aspects > of traits. If we can't write a web service that can handle setting > lots of traits every second of every day, we should go home. If > clients of placement want to set weird traits, more power to them. > > However, if clients of placement (such as nova) which are being the > orchestrator of resource providers manipulated by multiple systems > (neutron, cinder, ironic, cyborg, etc) wish to set some constraints > on how and what traits can do and mean, then that is up to them. > > nova-scheduler is the thing that is doing `GET > /allocation_candidates` for those multiple system. It presumably > should have some say in what traits it is willing to express and > use. > > But the placement service doesn't and shouldn't care. > > > Then we should add a data model that allow providers to be decorated > with > > key/value (or more complex than key/value) information where we can > query for > > those kinds of constraints without needing to encode all sorts of > non-binary > > bits of information into a capability string. > > Let's never do this, please. The three capabilities (ha!) 
of > placement that you listed above ("Does the...") are very powerful as > is and have a conceptual integrity that's really quite awesome. I > think keeping it contained and constrained in very "simple" concepts > like that was stroke of genius you (Jay) made and I'd hope we can > keep it clean like that. > > If we weren't a multiple-service oriented system, and instead had > some kind of k8s-like etcd-like > keeper-of-all-the-info-about-everything, then sure, having what we > currently model as resource providers be a giant blob of metadata > (with quantities, qualitiies, and key-values) that is an authority > for the entire system might make some kind of sense. > > But we don't. If we wanted to migrate to having something like that, > using placement as the trojan horse for such a change, either with > intent or by accident, would be unfortunate. > > > Propose such a thing and I'll gladly support it. But I won't support > > bastardizing the simple concept of a boolean capability just because we > don't > > want to change the API or database schema. > > For me, it is not a matter of not wanting to change the API or the > database schema. It's about not wanting to expand the concepts, and > thus the purpose, of the system. It's about wanting to keep focus > and functionality narrow so we can have a target which is "maturity" > and know when we're there. > > My summary: Traits are symbols that are 255 characters long that are > associated with a resource provider. It's possible to query for > resource providers that have or do not have a specific trait. This > has the effect of making the meaning of a trait a descriptor of the > resource provider. What the descriptor signifies is up to the thing > creating and using the resource provider, not placement. We need to > harden that contract and stick to it. Placement is like a common > carrier, it doesn't care what's in the box. 
> > /me cues brad pitt > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: > @anticdent__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Sat Sep 29 02:01:34 2018 From: soulxu at gmail.com (Alex Xu) Date: Sat, 29 Sep 2018 10:01:34 +0800 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <543894f0-8ee2-68b8-ac02-898a5359a2c6@gmail.com> References: <1538145718.22269.0@smtp.office365.com> <543894f0-8ee2-68b8-ac02-898a5359a2c6@gmail.com> Message-ID: Jay Pipes 于2018年9月29日周六 上午5:51写道: > On 09/28/2018 04:42 PM, Eric Fried wrote: > > On 09/28/2018 09:41 AM, Balázs Gibizer wrote: > >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried wrote: > >>> It's time somebody said this. > >>> > >>> Every time we turn a corner or look under a rug, we find another use > >>> case for provider traits in placement. But every time we have to have > >>> the argument about whether that use case satisfies the original > >>> "intended purpose" of traits. > >>> > >>> That's only reason I've ever been able to glean: that it (whatever "it" > >>> is) wasn't what the architects had in mind when they came up with the > >>> idea of traits. We're not even talking about anything that would > require > >>> changes to the placement API. Just, "Oh, that's not a *capability* - > >>> shut it down." > >>> > >>> Bubble wrap was originally intended as a textured wallpaper and a > >>> greenhouse insulator. Can we accept the fact that traits have (many, > >>> many) uses beyond marking capabilities, and quit with the arbitrary > >>> restrictions? > >> > >> How far are we willing to go? 
Does an arbitrary (key: value) pair > >> encoded in a trait name like key_`str(value)` (e.g. CURRENT_TEMPERATURE: > >> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in > >> placement? > > > > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but > > TEMPERATURE_ is not. > > That's correct, because you're encoding >1 piece of information into the > single string (the fact that it's a temperature *and* the value of that > temperature are the two pieces of information encoded into the single > string). > > Now that there's multiple pieces of information encoded in the string > the reader of the trait string needs to know how to decode those bits of > information, which is exactly what we're trying to avoid doing (because > we can see from the ComputeCapabilitiesFilter, the extra_specs mess, and > the giant hairball that is the NUMA and CPU pinning "metadata requests" > how that turns out). > May I check my understanding: is one of Jay's complaints that the metadata API is undiscoverable? That is the extra_specs mess and the ComputeCapabilitiesFilter mess? Another complaint is about the information in the string. I agree that TEMPERATURE_ is terrible. I prefer the way I used in the nvdimm proposal now: I don't want to use traits like NVDIMM_DEVICE_500GB and NVDIMM_DEVICE_1024GB. I want to put them into different resource providers, and use min_size and max_size to limit the allocation. And the user will request with a resource class like RC_NVDIMM_GB=512. > > > This thread isn't about setting these parameters; it's about getting > us to a point where we can discuss a question just like this one > without running up against: > > "That's a hard no, because you shouldn't encode key/value pairs in traits." > > "Oh, why's that?" > > "Because that's not what we intended when we created traits." > > "But it would work, and the alternatives are way harder." > > "-1" > > "But..." 
> > > > "-1" > > I believe I've articulated a number of times why traits should remain > unary pieces of information, and not just said "because that's what we > intended when we created traits". > > I'm tough on this because I've seen the garbage code and unmaintainable > mess that not having structurally sound data modeling concepts and > information interpretation rules leads to in Nova and I don't want to > encourage any more of it. > > -jay > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Sat Sep 29 02:15:28 2018 From: soulxu at gmail.com (Alex Xu) Date: Sat, 29 Sep 2018 10:15:28 +0800 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <1538145718.22269.0@smtp.office365.com> <543894f0-8ee2-68b8-ac02-898a5359a2c6@gmail.com> Message-ID: Sorry for appending another email; there is something I missed saying. Alex Xu 于2018年9月29日周六 上午10:01写道: > > > Jay Pipes 于2018年9月29日周六 上午5:51写道: > >> On 09/28/2018 04:42 PM, Eric Fried wrote: >> > On 09/28/2018 09:41 AM, Balázs Gibizer wrote: >> >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried >> wrote: >> >>> It's time somebody said this. >> >>> >> >>> Every time we turn a corner or look under a rug, we find another use >> >>> case for provider traits in placement. But every time we have to have >> >>> the argument about whether that use case satisfies the original >> >>> "intended purpose" of traits. >> >>> >> >>> That's only reason I've ever been able to glean: that it (whatever >> "it" >> >>> is) wasn't what the architects had in mind when they came up with the >> >>> idea of traits. 
We're not even talking about anything that would >> require >> >>> changes to the placement API. Just, "Oh, that's not a *capability* - >> >>> shut it down." >> >>> >> >>> Bubble wrap was originally intended as a textured wallpaper and a >> >>> greenhouse insulator. Can we accept the fact that traits have (many, >> >>> many) uses beyond marking capabilities, and quit with the arbitrary >> >>> restrictions? >> >> >> >> How far are we willing to go? Does an arbitrary (key: value) pair >> >> encoded in a trait name like key_`str(value)` (e.g. >> CURRENT_TEMPERATURE: >> >> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in >> >> placement? >> > >> > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but >> > TEMPERATURE_ is not. >> >> That's correct, because you're encoding >1 piece of information into the >> single string (the fact that it's a temperature *and* the value of that >> temperature are the two pieces of information encoded into the single >> string). >> >> Now that there's multiple pieces of information encoded in the string >> the reader of the trait string needs to know how to decode those bits of >> information, which is exactly what we're trying to avoid doing (because >> we can see from the ComputeCapabilitiesFilter, the extra_specs mess, and >> the giant hairball that is the NUMA and CPU pinning "metadata requests" >> how that turns out). >> > > May I understand the one of Jay's complain is about metadata API > undiscoverable? That is extra_spec mess and ComputeCapabilitiesFilter mess? > If yes, then we resolve the discoverable by the "/Traits" API. > > Another complain is about the information in the string. Agree with that > TEMPERATURE_ is terriable. > I prefer the way I used in nvdimm proposal now, I don't want to use Trait > NVDIMM_DEVICE_500GB, NVDIMM_DEVICE_1024GB. I want to put them into the > different resource provider, and use min_size, max_size limit the > allocation. 
And the user will request with resource class like > RC_NVDIMM_GB=512. > TEMPERATURE_ is wrong because of the way it is used. But I don't think the version of BIOS is wrong; I don't expect the end user to read the information from the trait directly, and there should be documentation from the admin to explain more. The version of BIOS should be a thing understood by the admin, and then that is enough. > >> >> > This thread isn't about setting these parameters; it's about getting >> > us to a point where we can discuss a question just like this one >> > without running up against: > >> > "That's a hard no, because you shouldn't encode key/value pairs in >> traits." >> > >> > "Oh, why's that?" >> > >> > "Because that's not what we intended when we created traits." >> > >> > "But it would work, and the alternatives are way harder." >> > >> > "-1" >> > >> > "But..." >> > >> > "-1" >> >> I believe I've articulated a number of times why traits should remain >> unary pieces of information, and not just said "because that's what we >> intended when we created traits". >> >> I'm tough on this because I've seen the garbage code and unmaintainable >> mess that not having structurally sound data modeling concepts and >> information interpretation rules leads to in Nova and I don't want to >> encourage any more of it. >> >> -jay >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aj at suse.com Sat Sep 29 09:03:14 2018 From: aj at suse.com (Andreas Jaeger) Date: Sat, 29 Sep 2018 11:03:14 +0200 Subject: [openstack-dev] [docs][charms] Updating Deployment Guides indices of published pages In-Reply-To: References: Message-ID: <3dead7d4-a505-6930-8553-4968cac60912@suse.com> On 9/27/18 10:04 AM, Frode Nordahl wrote: > Hello docs team, > > What would it take to re-generate the indices for Deployment Guides on > the published Queens [0] and Rocky [1] docs pages? > > It seems that the charms project has missed the index for those releases > due to some issues which now has been resolved [2].  We would love to > reclaim our space in the list! > > 0: https://docs.openstack.org/queens/deploy/ > 1: https://docs.openstack.org/rocky/deploy/ > 2: > https://review.openstack.org/#/q/topic:enable-openstack-manuals-rocky-latest+(status:open+OR+status:merged) We found the problem - you updated the build job to use bionic but nobody updated the publish one. This is fixed now since https://review.openstack.org/#/c/606147/ is merged. All should be fine again, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From mark at stackhpc.com Sat Sep 29 09:51:20 2018 From: mark at stackhpc.com (Mark Goddard) Date: Sat, 29 Sep 2018 10:51:20 +0100 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <1538145718.22269.0@smtp.office365.com> <543894f0-8ee2-68b8-ac02-898a5359a2c6@gmail.com> Message-ID: To add some context around what I suspect is the reason for the most recent incarnation of this debate, many Ironic users have a requirement to be able to influence the configuration of a server at deploy time, beyond the existing supported mechanisms. 
The classic example is hardware RAID - the ability to support workloads with different requirements is important, since if you're paying for bare metal cloud resources you'll want to make sure you're getting the most out of them. Another example that comes up is hyperthreading - often this is disabled for HPC workloads but enabled for HTC. We've had a plan to support deploy-time configuration for a few cycles. It began with adding support for traits [1] in Queens, and continued with the deploy steps framework [2] in Rocky. At the Stein PTG we had a lot of support [3] for finishing the job by implementing the deploy templates [4] spec that is currently in review. At a very high level, deploy templates allow us to map a required trait specified on a flavor to a set of deploy steps in ironic. These deploy steps are based on the existing cleaning steps framework that has existed in ironic for many releases, and should feel familiar to users of ironic. This scheme is conceptually quite simple, which I like. After a negative review on the spec from Jay on Thursday, I added a design to the alternatives section of the spec that I thought might align better with his view of the world. Essentially, decouple the scheduling and configuration - flavors may specify required traits as they can today, but also a more explicit list of names or UUIDs of ironic deploy templates. I'm still not sure how I feel about this. Architecturally it's cleaner and more flexible, but from a usability perspective it feels a little clunky. There was a discussion [5] in ironic's IRC yesterday that I missed, in which Jay offered to write up an alternative spec that uses glance metadata [6]. There were some concerns about adding a hard requirement on glance for the standalone use case, but we may be able to provide an alternative solution analogous to manual cleaning that fills that gap. I'm certainly interested to see what Jay comes up with. 
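To make the trait-to-steps idea concrete, here is a rough sketch of how a deploy template could expand a required trait on a flavor into ironic-style deploy steps. This is a hypothetical illustration only: the template names, step fields, and arguments below are invented for this sketch, not taken from the spec under review.

```python
# Hypothetical deploy templates: a trait name maps to a list of steps in
# the shape of ironic's cleaning steps (interface, step, args, priority).
DEPLOY_TEMPLATES = {
    "CUSTOM_HYPERTHREADING_ON": [
        {"interface": "bios", "step": "apply_configuration",
         "args": {"settings": [{"name": "LogicalProc", "value": "Enabled"}]},
         "priority": 150},
    ],
    "CUSTOM_RAID_10": [
        {"interface": "raid", "step": "create_configuration",
         "args": {"raid_config": {"logical_disks": [
             {"raid_level": "10", "size_gb": "MAX"}]}},
         "priority": 100},
    ],
}


def deploy_steps_for(required_traits):
    """Collect the deploy steps selected by a flavor's required traits."""
    steps = []
    for trait in required_traits:
        # Traits with no matching template affect scheduling only.
        steps.extend(DEPLOY_TEMPLATES.get(trait, []))
    # Run higher-priority steps first, mirroring cleaning-step ordering.
    return sorted(steps, key=lambda step: -step["priority"])
```

Under this sketch, a flavor requiring CUSTOM_RAID_10 would both schedule only to nodes exposing that trait and, at deploy time, trigger the matching RAID configuration step.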
If there is a better way of doing this, I'm all ears. That said, this is something the ironic community has been wanting for a long time now, and I can't see us waiting for a multi-cycle feature to land in nova, given that deploy templates currently requires no changes in nova. [1] http://specs.openstack.org/openstack/ironic-specs/specs/10.1/node-traits.html [2] https://specs.openstack.org/openstack/ironic-specs/specs/11.1/deployment-steps-framework.html [3] https://etherpad.openstack.org/p/ironic-stein-ptg-goals [4] https://review.openstack.org/#/c/504952/ [5] http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2018-09-28.log.html#t2018-09-28T14:22:57 [6] https://docs.openstack.org/glance/pike/user/metadefs-concepts.html On Sat, 29 Sep 2018 at 03:15, Alex Xu wrote: > Sorry for append another email for something I missed to say. > > Alex Xu 于2018年9月29日周六 上午10:01写道: > >> >> >> Jay Pipes 于2018年9月29日周六 上午5:51写道: >> >>> On 09/28/2018 04:42 PM, Eric Fried wrote: >>> > On 09/28/2018 09:41 AM, Balázs Gibizer wrote: >>> >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried >>> wrote: >>> >>> It's time somebody said this. >>> >>> >>> >>> Every time we turn a corner or look under a rug, we find another use >>> >>> case for provider traits in placement. But every time we have to have >>> >>> the argument about whether that use case satisfies the original >>> >>> "intended purpose" of traits. >>> >>> >>> >>> That's only reason I've ever been able to glean: that it (whatever >>> "it" >>> >>> is) wasn't what the architects had in mind when they came up with the >>> >>> idea of traits. We're not even talking about anything that would >>> require >>> >>> changes to the placement API. Just, "Oh, that's not a *capability* - >>> >>> shut it down." >>> >>> >>> >>> Bubble wrap was originally intended as a textured wallpaper and a >>> >>> greenhouse insulator. 
Can we accept the fact that traits have (many, >>> >>> many) uses beyond marking capabilities, and quit with the arbitrary >>> >>> restrictions? >>> >> >>> >> How far are we willing to go? Does an arbitrary (key: value) pair >>> >> encoded in a trait name like key_`str(value)` (e.g. >>> CURRENT_TEMPERATURE: >>> >> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see >>> in >>> >> placement? >>> > >>> > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but >>> > TEMPERATURE_ is not. >>> >>> That's correct, because you're encoding >1 piece of information into the >>> single string (the fact that it's a temperature *and* the value of that >>> temperature are the two pieces of information encoded into the single >>> string). >>> >>> Now that there's multiple pieces of information encoded in the string >>> the reader of the trait string needs to know how to decode those bits of >>> information, which is exactly what we're trying to avoid doing (because >>> we can see from the ComputeCapabilitiesFilter, the extra_specs mess, and >>> the giant hairball that is the NUMA and CPU pinning "metadata requests" >>> how that turns out). >>> >> >> May I understand the one of Jay's complain is about metadata API >> undiscoverable? That is extra_spec mess and ComputeCapabilitiesFilter mess? >> > > If yes, then we resolve the discoverable by the "/Traits" API. > > >> >> Another complain is about the information in the string. Agree with that >> TEMPERATURE_ is terriable. >> I prefer the way I used in nvdimm proposal now, I don't want to use Trait >> NVDIMM_DEVICE_500GB, NVDIMM_DEVICE_1024GB. I want to put them into the >> different resource provider, and use min_size, max_size limit the >> allocation. And the user will request with resource class like >> RC_NVDIMM_GB=512. >> > > TEMPERATURE_ is wrong, as the way using it. 
But I don't > thing the version of BIOS is wrong, I don't expect the end user to ready > the information from the trait directly, there should document from the > admin to explain more. The version of BIOS should be a thing understand by > the admin, then it is enough. > > >> >>> >>> > This thread isn't about setting these parameters; it's about getting >>> > us to a point where we can discuss a question just like this one >>> > without running up against: > >>> > "That's a hard no, because you shouldn't encode key/value pairs in >>> traits." >>> > >>> > "Oh, why's that?" >>> > >>> > "Because that's not what we intended when we created traits." >>> > >>> > "But it would work, and the alternatives are way harder." >>> > >>> > "-1" >>> > >>> > "But..." >>> > >>> > "-I >>> >>> I believe I've articulated a number of times why traits should remain >>> unary pieces of information, and not just said "because that's what we >>> intended when we created traits". >>> >>> I'm tough on this because I've seen the garbage code and unmaintainable >>> mess that not having structurally sound data modeling concepts and >>> information interpretation rules leads to in Nova and I don't want to >>> encourage any more of it. >>> >>> -jay >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at stackhpc.com Sat Sep 29 10:00:27 2018 From: mark at stackhpc.com (Mark Goddard) Date: Sat, 29 Sep 2018 11:00:27 +0100 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <1538145718.22269.0@smtp.office365.com> Message-ID: On Fri, 28 Sep 2018 at 22:07, melanie witt wrote: > On Fri, 28 Sep 2018 15:42:23 -0500, Eric Fried wrote: > > On 09/28/2018 09:41 AM, Balázs Gibizer wrote: > >> > >> > >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried wrote: > >>> It's time somebody said this. > >>> > >>> Every time we turn a corner or look under a rug, we find another use > >>> case for provider traits in placement. But every time we have to have > >>> the argument about whether that use case satisfies the original > >>> "intended purpose" of traits. > >>> > >>> That's only reason I've ever been able to glean: that it (whatever "it" > >>> is) wasn't what the architects had in mind when they came up with the > >>> idea of traits. We're not even talking about anything that would > require > >>> changes to the placement API. Just, "Oh, that's not a *capability* - > >>> shut it down." > >>> > >>> Bubble wrap was originally intended as a textured wallpaper and a > >>> greenhouse insulator. Can we accept the fact that traits have (many, > >>> many) uses beyond marking capabilities, and quit with the arbitrary > >>> restrictions? > >> > >> How far are we willing to go? Does an arbitrary (key: value) pair > >> encoded in a trait name like key_`str(value)` (e.g. CURRENT_TEMPERATURE: > >> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in > >> placement? > > > > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but > > TEMPERATURE_ is not. This thread isn't about setting > > these parameters; it's about getting us to a point where we can discuss > > a question just like this one without running up against: > > > > "That's a hard no, because you shouldn't encode key/value pairs in > traits." 
> > > > "Oh, why's that?" > > > > "Because that's not what we intended when we created traits." > > > > "But it would work, and the alternatives are way harder." > > > > "-1" > > > > "But..." > > > > "-1" > I think it's not so much about the intention when traits were created > and more about what UX callers of the API are left with, if we were to > recommend representing everything with traits and not providing another > API for key-value use cases. We need to think about what the maintenance > of their deployments will look like if traits are the only tool we provide. > > I get that we don't want to put artificial restrictions on how API > callers can and can't use the traits API, but will they be left with a > manageable experience if that's all that's available? > > I don't have time right now to come up with a really great example, but > I'm thinking along the lines of, can this get out of hand (a la "flavor > explosion") for an operator using traits to model what their compute > hosts can do? > > Please forgive the oversimplified example I'm going to try to use to > illustrate my concern: > > We all agree we can have traits for resource providers like: > > * HAS_SSD > * HAS_GPU > * HAS_WINDOWS > > But things get less straightforward when we think of traits like: > > * HAS_OWNER_CINDER > * HAS_OWNER_NEUTRON > * HAS_OWNER_CYBORG > * HAS_RAID_0 > * HAS_RAID_1 > * HAS_RAID_5 > * HAS_RAID_6 > * HAS_RAID_10 > I think the numbers are a red herring here. RAID levels include a limited set of combinations, of which only a handful are frequently used. It's not the same as the temperature example, which is a continuous range of numbers. That said, a key:value encoding could work well for RAID. * HAS_NUMA_CELL_0 > * HAS_NUMA_CELL_1 > * HAS_NUMA_CELL_2 > * HAS_NUMA_CELL_3 > > I'm concerned about a lot of repetition here and maintenance headache > for operators. 
That's where the thoughts about whether we should provide > something like a key-value construct to API callers where they can > instead say: > > * OWNER=CINDER > * RAID=10 > * NUMA_CELL=0 > > for each resource provider. > > If I'm off base with my example, please let me know. I'm not a placement > expert. > > Anyway, I hope that gives an idea of what I'm thinking about in this > discussion. I agree we need to pick a direction and go with it. I'm just > trying to look out for the experience operators are going to be using > this and maintaining it in their deployments. > > Cheers, > -melanie > > > > > > > > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Sat Sep 29 12:50:31 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sat, 29 Sep 2018 14:50:31 +0200 Subject: [openstack-dev] [os-upstream-institute] Find a slot for a meeting to discuss - ACTION NEEDED In-Reply-To: <313CAE1B-CCBB-426F-976B-0320B2273BA1@gmail.com> References: <313CAE1B-CCBB-426F-976B-0320B2273BA1@gmail.com> Message-ID: <948BBE83-6631-4CCC-A558-DEFDA6149C41@gmail.com> Hi Training Team, Based on the votes on the Doodle poll below we will have our ad-hoc meeting __next Friday (October 5) 1600 UTC__. Hangouts link for the call: https://hangouts.google.com/call/BKnvu7e72uB_Z-QDHDF2AAEI If you’re available for the Berlin training as mentor and haven’t signed up on the wiki yet please do so: https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions#Berlin_Crew Please let me know if you have any questions. Thanks, Ildikó > On 2018. 
Sep 23., at 14:29, Ildiko Vancsa wrote: > > Hi Training Team, > > With the new workshop style training format that is utilizing the Contributor Guide we have less work with the training material side and we have less items to discuss in the form of regular meetings. > > However, we have a few items to go through before the upcoming training in Berlin to make sure we are fully prepared. We also have Florian from City Network working on the online version of the training that he would like to discuss with the team. > > As the current meeting slot is very inconvenient in Europe and Asia as well I put together a Doodle poll to try to find a better slot if we can. As we have people all around the globe all the slots are inconvenient to a subset of the team, but if we can agree on one for one meeting it would be great. > > Please vote on the poll as soon as possible: https://doodle.com/poll/yrp9anbb7weaun4h > > When we have the full list of mentors for the Berlin training I will send out a separate poll for one or two prep calls. If you’re available for the training and not signed up on the wiki yet please sign up: https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions#Berlin_Crew > > Please let me know if you have any questions. > > Thanks and Best Regards, > Ildikó > > From doug at doughellmann.com Sat Sep 29 15:27:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sat, 29 Sep 2018 11:27:11 -0400 Subject: [openstack-dev] [goal][python3] week 7 update In-Reply-To: References: Message-ID: Doug Hellmann writes: > Doug Hellmann writes: > >> == Things We Learned This Week == >> >> When we updated the tox.ini settings for jobs like pep8 and release >> notes early in the Rocky session we only touched some of the official >> repositories. I'll be working on making a list of the ones we missed so >> we can update them by the end of Stein. > > I see quite a few repositories with tox settings out of date (about 350, > see below). 
Given the volume, I'm going to prepare the patches and > propose them a few at a time over the next couple of weeks. Zuul looked bored this morning so I went ahead and proposed a few of the larger batches of these changes for the Charms, OpenStack Ansible, and Horizon teams. TripleO also has quite a few, but since we know the gate is unstable I will hold off on those for now. There will be more patches when there is CI capacity again. Doug From jaypipes at gmail.com Sat Sep 29 15:40:44 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Sat, 29 Sep 2018 11:40:44 -0400 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> Message-ID: <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> On 09/28/2018 04:36 PM, Eric Fried wrote: > So here it is. Two of the top influencers in placement, one saying we > shouldn't overload traits, the other saying we shouldn't add a primitive > that would obviate the need for that. Historically, this kind of > disagreement seems to result in an impasse: neither thing happens and > those who would benefit are forced to find a workaround or punt. > Frankly, I don't particularly care which way we go; I just want to be > able to do the things. I don't think that's a fair statement. You absolutely *do* care which way we go. You want to encode multiple bits of information into a trait string -- such as "PCI_ADDRESS_01_AB_23_CD" -- and leave it up to the caller to have to understand that this trait string has multiple bits of information encoded in it (the fact that it's a PCI device and that the PCI device is at 01_AB_23_CD). You don't see a problem encoding these variants inside a string. Chris doesn't either. 
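That decoding burden is easy to sketch (the trait name and parser below are hypothetical, invented purely to illustrate the point): once two facts are packed into one string, every consumer must carry matching parse logic, while a structured pair needs no shared decoding contract.

```python
PREFIX = "CUSTOM_PCI_ADDRESS_"


def parse_pci_address_trait(trait):
    """Recover the address a writer packed into an overloaded trait name."""
    if not trait.startswith(PREFIX):
        # A plain unary trait carries one fact; no parsing is needed.
        return None
    # The writer joined the address segments with "_", so every reader
    # must know to split and re-join them; that convention lives only in
    # out-of-band documentation.
    return ":".join(trait[len(PREFIX):].split("_")).lower()


# The same information as structured data, with no encoding to undo:
structured = {"pci_address": "01:ab:23:cd"}
```

Each new encoded trait multiplies parsers like this across every consumer of the placement API, which is the ComputeCapabilitiesFilter failure mode in miniature.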
I *do* see a problem with it, based on my experience in Nova where this kind of thing leads to ugly, unmaintainable, and incomprehensible code as I have pointed to in previous responses. Furthermore, your point isn't that "you just want to be able to do the things". Your point (and the point of others, from Cyborg and Ironic) is that you want to be able to use placement to pass various bits of information to an instance, and placement wasn't designed for that purpose. Nova was. So, instead of working out a solution with the Nova team for passing configuration data about an instance, the proposed solution is instead to hack/encode multiple bits of information into a trait string. This proposed solution is seen as a way around having to work out a more appropriate solution that has Nova pass that configuration data (as is appropriate, since nova is the project that manages instances) to the virt driver or generic device manager (i.e. Cyborg) before the instance spawns. I'm working on a spec that will describe a way for the user to instruct Nova to pass configuration data to the virt driver (or device manager) before instance spawn. This will have nothing to do with placement or traits, since this configuration data is not modeling scheduling and placement decisions. I hope to have that spec done by Monday so we can discuss on the spec. Best, -jay From ed at leafe.com Sat Sep 29 15:50:09 2018 From: ed at leafe.com (Ed Leafe) Date: Sat, 29 Sep 2018 10:50:09 -0500 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> Message-ID: <5E773402-AF6D-4F2E-BDA6-68C54A732280@leafe.com> On Sep 29, 2018, at 10:40 AM, Jay Pipes wrote: > >> So here it is. 
Two of the top influencers in placement, one saying we >> shouldn't overload traits, the other saying we shouldn't add a primitive >> that would obviate the need for that. Historically, this kind of >> disagreement seems to result in an impasse: neither thing happens and >> those who would benefit are forced to find a workaround or punt. >> Frankly, I don't particularly care which way we go; I just want to be >> able to do the things. > > I don't think that's a fair statement. You absolutely *do* care which way we go. You want to encode multiple bits of information into a trait string -- such as "PCI_ADDRESS_01_AB_23_CD" -- and leave it up to the caller to have to understand that this trait string has multiple bits of information encoded in it (the fact that it's a PCI device and that the PCI device is at 01_AB_23_CD). > > You don't see a problem encoding these variants inside a string. Chris doesn't either. > > I *do* see a problem with it, based on my experience in Nova where this kind of thing leads to ugly, unmaintainable, and incomprehensible code as I have pointed to in previous responses. I think that there is a huge difference between the Placement service stating "this is how you use this service" and actively preventing others from doing dumb things with Placement. If we, as a team, tell others that it is OK to manage state, or use a trait name to encode multiple bits of information, then others will be more likely to do just that, and end up with an unreadable mess around their part of the code that works with placement. The result will be a perception among others along the lines of "placement sucks". If we state clearly that this is not a good way to work with Placement, and they do so anyway, well, that's on them. So we shouldn't enforce anything about trait names except the custom namespace and the length. If other services want to be overly clever and try to overload a trait name, it's up to them to deal with the resulting mess. 
But in no way should we *ever* encourage, even tacitly, this approach. -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From ed at leafe.com Sat Sep 29 16:01:35 2018 From: ed at leafe.com (Ed Leafe) Date: Sat, 29 Sep 2018 11:01:35 -0500 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> Message-ID: <329121E4-8C50-4005-A25A-061919CF9A2E@leafe.com> On Sep 28, 2018, at 12:19 PM, Chris Dent wrote: > > I don't think placement should be concerned about temporal aspects > of traits. If we can't write a web service that can handle setting > lots of traits every second of every day, we should go home. If > clients of placement want to set weird traits, more power to them. > > However, if clients of placement (such as nova) which are being the > orchestrator of resource providers manipulated by multiple systems > (neutron, cinder, ironic, cyborg, etc) wish to set some constraints > on how and what traits can do and mean, then that is up to them. This. It is up to the clients to determine how to use Placement. But it is up to Placement to give guidance as to how to best use it. If a client wants to hack trait names, then they certainly can, and it might work out just fine. > On Sep 28, 2018, at 8:25 AM, Eric Fried wrote: > > Bubble wrap was originally intended as a textured wallpaper and a > greenhouse insulator. Can we accept the fact that traits have (many, > many) uses beyond marking capabilities, and quit with the arbitrary > restrictions? The crux here is what one considers "arbitrary". If we as a project state "sure, go ahead and use this however you like", we are going to be complicit in the technical debt they accumulate, as Jay has referenced with regards to Nova's ComputeCapabilities filter. 
We want consumers of Placement to write good, solid code that doesn't encourage technical debt accumulation. We have no way to prevent bad decisions by others, but we should never document that such usages are fine. -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From mark.mielke at gmail.com Sat Sep 29 16:22:42 2018 From: mark.mielke at gmail.com (Mark Mielke) Date: Sat, 29 Sep 2018 12:22:42 -0400 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <329121E4-8C50-4005-A25A-061919CF9A2E@leafe.com> References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <329121E4-8C50-4005-A25A-061919CF9A2E@leafe.com> Message-ID: On Sat, Sep 29, 2018 at 12:02 PM Ed Leafe wrote: > On Sep 28, 2018, at 12:19 PM, Chris Dent wrote: > > I don't think placement should be concerned about temporal aspects > > of traits. If we can't write a web service that can handle setting > > lots of traits every second of every day, we should go home. If > > clients of placement want to set weird traits, more power to them. > > > > However, if clients of placement (such as nova) which are being the > > orchestrator of resource providers manipulated by multiple systems > > (neutron, cinder, ironic, cyborg, etc) wish to set some constraints > > on how and what traits can do and mean, then that is up to them. > It is up to the clients to determine how to use Placement. But it is up to > Placement to give guidance as to how to best use it. If a client wants to > hack trait names, then they certainly can, and it might work out just fine. > It is only up to the clients to determine how to use Placement, if the Placement team specifies this as an intent. It's very important for users of the system to be in alignment with the providers of the system to avoid surprise and complications. 
Yes, users can hang themselves with enough rope - but this is a reckless position to hold. Let us say that Placement and Nova evolve traits according to a roadmap that ends up causing users to get stuck on an old release with no migration path, because they used the API in a way that was never intended and never acknowledged. I'm reading along with interest, and the main issue I see here is what Jay referred to, which is that there is a process to get well-formulated ideas agreed upon and incorporated, and it does seem like this is a threat to do something that works without following this process, not because it is smart, but because it is possible. I have a similar dilemma on a different system, which has to do with the proper use of labels in a system like Jira. Jira issue labels are intended to be more flexible and defined according to user requirements, to meet needs not envisioned by the designers of the issue field schemes. They work well for certain ad-hoc uses, but they have side effects and limitations which make them very poor for many uses. A key one is related to this discussion, in that you can't easily evaluate only part of a label, so if you did inadvisable things like using labels instead of priority, with multiple label values to indicate each priority, you end up with rather silly evaluations like "label in ('priority_high', 'priority_medium', 'priority_low')". This gets more complex over time and ends up being a huge burden for whoever inherits such a system. Yes, it was done to solve a real problem - but the smart person would have addressed this with the issue field scheme and would have prevented this technical debt burden from existing in the first place. I think you should push Jay to propose what he thinks a good solution would be, and start discussing these options. I'm interested in reading more about them.
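The Jira-label anti-pattern above maps directly onto overloaded traits: once a flat namespace carries encoded key/value pairs, every consumer has to parse structure back out of strings instead of doing a simple membership test. A rough Python sketch of the difference; every CUSTOM_* name below is a hypothetical example, not a real trait:

```python
# Rough sketch only; every CUSTOM_* trait name below is hypothetical.

# Boolean capability traits: a membership test is the whole consumer API.
capability_traits = {"CUSTOM_HW_GPU", "CUSTOM_STORAGE_SSD"}
has_gpu = "CUSTOM_HW_GPU" in capability_traits

# Overloaded traits: a key and a value packed into one string.
overloaded_traits = {"CUSTOM_PCI_ADDRESS_01_AB_23_CD", "CUSTOM_HW_GPU"}

def find_pci_address(traits):
    """Every consumer must know the encoding convention to get data out."""
    prefix = "CUSTOM_PCI_ADDRESS_"
    for trait in traits:
        if trait.startswith(prefix):
            return trait[len(prefix):]  # parsing, not lookup
    return None

# The trait-world equivalent of the Jira "label in (...)" evaluation:
# enumerating every packed value instead of comparing one structured field.
priority_like = {"CUSTOM_PRIORITY_HIGH", "CUSTOM_PRIORITY_MEDIUM"}
has_elevated_priority = bool(priority_like & overloaded_traits)
```

The string-parsing helper is exactly the kind of convention that each client would have to reimplement and keep in sync, which is the maintenance burden being described.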
I really don't like the idea of embedding data bits into a single string that effectively includes both a "key" and a "value", as my own experience suggests this is a very bad idea. Myself, I think "traits" are not necessarily boolean. I see "brown hair" as a human trait. However, if they are not boolean, then this invites the need to evaluate with comparison operators, and this would morph into a much more complex system. The idea of passing configuration data in might solve a subset of the cases - and it might solve your subset. But I do think there is a more general solution possible if the requirements warranted the development. I also want to add that I think traits should not be dynamic things that change from second to second. The idea of passing temperature information via traits sounded somewhat ridiculous to me. I think that might have been the intent of the original poster: to present a ridiculous example and hope people understood. I hope nobody was taking it seriously. :-) -- Mark Mielke -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Sat Sep 29 20:35:28 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 29 Sep 2018 15:35:28 -0500 Subject: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full? Message-ID: <4f1bf485-f663-69ae-309e-ab9286e588e1@gmail.com> Nova, cinder and tempest run the nova-multiattach job in their check and gate queues. The job was added in Queens as a separate job because we had to change the Ubuntu Cloud Archive we used in Queens to get multiattach working. Since Rocky, devstack defaults to a version of the UCA that works for multiattach, so there isn't really anything preventing us from running the tempest multiattach tests in the integrated gate. The job tries to be as minimal as possible by only running tempest.api.compute.* tests, but it still means spinning up a new node and devstack for testing.
Given the state of the gate recently, I'm thinking it would be good if we dropped the nova-multiattach job in Stein and just enabled the multiattach tests in one of the other integrated gate jobs. I was initially just going to enable it in the nova-next job, but we don't run that on cinder or tempest changes. I'm not sure if tempest-full is a good place for this, though, since that job already runs a lot of tests and has been timing out a lot lately [1][2]. The tempest-slow job is another option, but cinder doesn't currently run that job (it probably should, since it runs volume-related tests, including the only tempest tests that use encrypted volumes). Are there other ideas/options for enabling multiattach in another job that nova/cinder/tempest already use so we can drop the now mostly redundant nova-multiattach job? [1] http://status.openstack.org/elastic-recheck/#1686542 [2] http://status.openstack.org/elastic-recheck/#1783405 -- Thanks, Matt From miguel at mlavalle.com Sun Sep 30 00:20:20 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sat, 29 Sep 2018 19:20:20 -0500 Subject: [openstack-dev] [ironic][neutron] SmartNics with Ironic In-Reply-To: References: Message-ID: Hi, Yes, this also matches the recollection of the joint conversation in Denver. Please look at the "Ironic x-project discussion - Smartnics" section in http://lists.openstack.org/pipermail/openstack-dev/2018-September/135032.html Regards Miguel On Thu, Sep 27, 2018 at 1:31 PM Julia Kreger wrote: > Greetings everyone, > > Now that the PTG is over, I would like to go ahead and get the > specification that was proposed to ironic-specs updated to represent > the discussions that took place at the PTG. > > A few highlights from my recollection: > > * Ironic being the source of truth for the hardware configuration for > the neutron agent to determine where to push configuration to. This > would include the address and credential information (certificates > right?).
> * The information required is somehow sent to neutron (possibly as > part of the binding profile, which we could send each time port > actions are requested by Ironic.) > * The Neutron agent running on the control plane connects outbound to > the smartnic, using information supplied to perform the appropriate > network configuration. > * In Ironic, this would likely be a new network_interface driver > module, with some additional methods that help facilitate the > work-flow logic changes needed in each deploy_interface driver module. > * Ironic would then be informed or gain awareness that the > configuration has been completed and that the deployment can proceed. > (A different spec has been proposed regarding this.) > > I have submitted a forum session based upon this and the agreed upon > goal at the PTG was to have the ironic spec written up to describe the > required changes. > > I guess the next question is, who wants to update the specification? > > -Julia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Sun Sep 30 00:42:09 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sat, 29 Sep 2018 19:42:09 -0500 Subject: [openstack-dev] [neutron][stadium][networking] Seeking proposals for non-voting Stadium projects in Neutron check queue Message-ID: Dear networking Stackers, During the recent PTG in Denver, we discussed measures to prevent patches merged in the Neutron repo from breaking Stadium and related networking projects in general.
We decided to implement the following: 1) For Stadium projects, we want to add non-voting jobs to the Neutron check queue 2) For non-Stadium projects, we are inviting them to add 3rd party CI jobs The next step is for each project to propose the jobs that they want to run against Neutron patches. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From moshele at mellanox.com Sun Sep 30 06:25:58 2018 From: moshele at mellanox.com (Moshe Levi) Date: Sun, 30 Sep 2018 06:25:58 +0000 Subject: [openstack-dev] [ironic][neutron] SmartNics with Ironic In-Reply-To: References: Message-ID: Hi Julia, I don't mind updating the ironic spec [1]. Unfortunately, I wasn't at the PTG, but I had a sync meeting with Isuku. As I see it, there are two use cases: 1. Running the neutron ovs agent in the smartnic 2. Running the neutron super ovs agent which manages the ovs running on the smartnic. It seems that most of the discussion was around the second use case. This is my understanding of the ironic/neutron PTG meeting: 1. Ironic cores don't want to change the deployment interface as proposed in [1]. 2. We should add a new network_interface for use case 2. But what about the first use case? Should it be a new network_interface as well? 3. We should delay the port binding until the baremetal is powered on and the ovs is running. * For the first use case I was thinking of changing the neutron server to just keep the port binding information in the neutron DB. Then when the neutron ovs agent is alive it will retrieve all the baremetal ports, add them to the ovsdb, and start the neutron ovs agent fullsync. * For the second use case the agent is alive, so the agent itself can monitor the ovsdb of the baremetal and configure it when it is up 4. How to notify that the neutron agent successfully/unsuccessfully bound the port? * In both use cases we should use neutron-ironic notifications to make sure the port binding was done successfully. Is my understanding correct?
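To make the deferred-binding idea in the use cases above concrete, here is a rough Python sketch of the control flow; the binding-profile keys and return values are purely illustrative guesses, since the real schema is whatever the spec under review ends up defining:

```python
# Illustrative sketch only: the real binding-profile schema and driver
# hooks are whatever the ironic spec under review defines. All key
# names, addresses, and return values here are guesses for discussion.

smartnic_binding_profile = {
    "local_link_information": [{
        "hostname": "smartnic-host-01",         # hypothetical node name
        "smartnic_address": "192.0.2.10",       # where the agent connects outbound
        "smartnic_credentials": "cert-ref-01",  # e.g. a certificate reference
    }],
}

def try_bind_port(port, agent_alive):
    """Sketch of 'delay port binding until the baremetal is powered on
    and the ovs on the smartnic is running' (use case 1)."""
    if not agent_alive:
        # Keep the binding request in the neutron DB; once the agent
        # reports alive it retrieves all baremetal ports and full-syncs.
        return "binding_deferred"
    # Agent is up: push the configuration described in the profile.
    return "binding_complete"
```

The point of the sketch is only the control flow: keep the binding in the DB while the agent is down, and let the agent full-sync all baremetal ports once it reports alive.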
[1] - https://review.openstack.org/#/c/582767/ From: Miguel Lavalle Sent: Sunday, September 30, 2018 3:20 AM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [ironic][neutron] SmartNics with Ironic Hi, Yes, this also matches the recollection of the joint conversation in Denver. Please look at the "Ironic x-project discussion - Smartnics" section in http://lists.openstack.org/pipermail/openstack-dev/2018-September/135032.html Regards Miguel On Thu, Sep 27, 2018 at 1:31 PM Julia Kreger > wrote: Greetings everyone, Now that the PTG is over, I would like to go ahead and get the specification that was proposed to ironic-specs updated to represent the discussions that took place at the PTG. A few highlights from my recollection: * Ironic being the source of truth for the hardware configuration for the neutron agent to determine where to push configuration to. This would include the address and credential information (certificates right?). * The information required is somehow sent to neutron (possibly as part of the binding profile, which we could send at each time port actions are requested by Ironic.) * The Neutron agent running on the control plane connects outbound to the smartnic, using information supplied to perform the appropriate network configuration. * In Ironic, this would likely be a new network_interface driver module, with some additional methods that help facilitate the work-flow logic changes needed in each deploy_interface driver module. * Ironic would then be informed or gain awareness that the configuration has been completed and that the deployment can proceed. (A different spec has been proposed regarding this.) I have submitted a forum session based upon this and the agreed upon goal at the PTG was to have the ironic spec written up to describe the required changes. I guess the next question is, who wants to update the specification? 
-Julia __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Sun Sep 30 15:33:45 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 30 Sep 2018 10:33:45 -0500 Subject: [openstack-dev] [nova][python-novaclient] A Test issue in python-novaclient. In-Reply-To: <004901d45869$f0713b70$d153b250$@126.com> References: <004901d45869$f0713b70$d153b250$@126.com> Message-ID: <6796d5e5-614b-2cb9-3fa3-24cfe5fe0978@gmail.com> On 9/29/2018 10:01 PM, Tao Li wrote: > I found this test was added about ten days ago in this patch > https://review.openstack.org/#/c/599276/, > > I checked it and don’t know why it failed. I think my commit shouldn’t > cause this issue. So do you have any suggestions for me? > Yes, it must be an intermittent race bug introduced by that change for the 2.66 microversion. Since it deals with filtering based on time, we might not have a time window that is big enough (we expect to get a result of changes < $before but are getting <= $before). http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22%7C%20%20%20%20%20testtools.matchers._impl.MismatchError%3A%20%5B'create'%5D%20!%3D%20%5B'create'%2C%20'stop'%5D%5C%22%20AND%20tags%3A%5C%22console%5C%22&from=7d Please report a bug against python-novaclient. The 2.66 test is based on a similar changes_since test, so we should see why they are behaving differently.
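The suspected boundary race is easy to model in isolation: if the "stop" action lands in the same (second-granularity) instant as the changes-before boundary the test computed, an inclusive <= comparison returns it while the test expects a strict < to exclude it. A standalone sketch, not the actual novaclient test code:

```python
from datetime import datetime, timedelta

# Hypothetical timestamps standing in for instance-action created_at values.
t0 = datetime(2018, 9, 30, 12, 0, 0)
before = t0 + timedelta(seconds=1)            # boundary the test computed
actions = [("create", t0), ("stop", before)]  # "stop" lands exactly on the boundary

# What the test assumes: strictly-before filtering excludes "stop".
strict = [name for name, ts in actions if ts < before]

# What an inclusive comparison returns when a timestamp collides with the
# boundary: both actions, matching the observed
# ['create'] != ['create', 'stop'] mismatch.
inclusive = [name for name, ts in actions if ts <= before]
```

Widening the window, or truncating timestamps to whole seconds consistently on both sides, would make the test deterministic.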
-- Thanks, Matt From mriedemos at gmail.com Sun Sep 30 16:02:17 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 30 Sep 2018 11:02:17 -0500 Subject: [openstack-dev] Placement extraction update Message-ID: <29d6e33d-78e9-73a7-a192-18c8b6b15c25@gmail.com> I finally got a passing neutron-grenade run in change [1]. That's the grenade change which populates the placement DB in Stein from the placement-related table contents of the nova_api DB from Rocky. It also writes out the placement.conf file for Stein before starting the Stein services. As a result, I'm +2 on Dan's mysql-migrate-db.sh script [2]. The grenade change is also dependent on three other changes for neutron [3], ironic [4] and heat [5] grenade jobs to require the openstack/placement project when zuul/devstack-gate clones its required projects before running grenade.sh. Those are just the related project grenade jobs that are hit as part of the grenade patch. There could be others I'm missing, which means projects might need to update their grenade job definitions after the grenade change merges. It looks like that could be quite a few projects [6]. If the infra/QA teams have a better idea of how to require openstack/placement in stein+ only, I'm all ears. Maybe that's some conditional branch logic we can hack into devstack-gate [7] like we do for neutron? 
[8] [1] https://review.openstack.org/#/c/604454/ [2] https://review.openstack.org/#/c/603234/ [3] https://review.openstack.org/#/c/604458/ [4] https://review.openstack.org/#/c/606850/ [5] https://review.openstack.org/#/c/606851/ [6] http://codesearch.openstack.org/?q=export%20PROJECTS%3D%22openstack-dev%5C%2Fgrenade%20%5C%24PROJECTS%22&i=nope&files=&repos= [7] https://github.com/openstack-infra/devstack-gate/blob/95fa4343104eafa655375cce3546d27139211d13/devstack-vm-gate-wrap.sh#L138 [8] https://github.com/openstack-infra/devstack-gate/blob/95fa4343104eafa655375cce3546d27139211d13/devstack-vm-gate-wrap.sh#L195 -- Thanks, Matt From mriedemos at gmail.com Sun Sep 30 16:14:17 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 30 Sep 2018 11:14:17 -0500 Subject: [openstack-dev] Placement extraction update In-Reply-To: <29d6e33d-78e9-73a7-a192-18c8b6b15c25@gmail.com> References: <29d6e33d-78e9-73a7-a192-18c8b6b15c25@gmail.com> Message-ID: On 9/30/2018 11:02 AM, Matt Riedemann wrote: > Maybe that's some conditional branch logic we can hack into > devstack-gate [7] like we do for neutron? [8] I'm hoping this works: https://review.openstack.org/#/c/606853/ -- Thanks, Matt From hongbin034 at gmail.com Sun Sep 30 18:55:34 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sun, 30 Sep 2018 14:55:34 -0400 Subject: [openstack-dev] [kolla][octavia] Containerize the amphora-agent Message-ID: Hi all, I am working on the Zun integration for Octavia. I did some preliminary research, and it seems what we need to do is to containerize the amphora agent that is currently packaged and shipped in a VM image. I wonder if anyone already has a Docker image for it that I can leverage. If not, I will create one. Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From miguel at mlavalle.com Sun Sep 30 21:42:42 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 30 Sep 2018 16:42:42 -0500 Subject: [openstack-dev] [neutron][neutron-lib] Seeking neutron-lib developers Message-ID: Dear Neutron community, As everybody knows, neutron-lib "is an OpenStack library project used by Neutron, Advanced Services, and third-party projects that aims to provide common functionality across all such consumers" ( https://docs.openstack.org/neutron-lib/latest/). In general terms, the effort in neutron-lib consists of two steps: 1. Extract common functionality from Neutron and re-home it in neutron-lib. 2. Consume the re-homed functionality in Neutron and all its related networking projects. This has been an on-going effort for several cycles and there is still a lot to do. During the recent Stein PTG in Denver, the team agreed to send a message to the mailing list to invite community developers to help with this effort. If you are interested in participating, please add your name to this etherpad: https://etherpad.openstack.org/p/neutron-lib-volunteers-and-punch-list. Once we have a group of volunteers, Boden Russel (irc: boden) has agreed to create a to-do list. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: